Jack Clark

06-08-2022 18:13

One like = one spicy take about AI policy.

A surprisingly large fraction of AI policy work at large technology companies is about doing 'follow the birdie' with government - getting them to look in one direction, and away from another area of tech progress

The vast majority of AI policy people I speak to don't seem that interested in understanding the guts of the technology they're making policy about

China has a much better-developed AI policy approach than Europe or the United States. China is actually surprisingly good at regulating things like, say, synthetic media.

There is not some secret team working for a government in the West building incredibly large-scale general models. There are teams doing applied work in intelligence.

The real danger in Western AI policy isn't that AI is doing bad stuff, it's that governments are so unfathomably behind the frontier that they have no notion of _how_ to regulate, and it's unclear if they _can_

Many AI policy teams in industry are constructed as basically the second line of brand defense after the public relations team. A huge % of policy work is based around reacting to perceived optics problems, rather than real problems.

Many of the problems in AI policy stem from the fact that economy-of-scale capitalism is, by nature, anti-democratic, and capex-intensive AI is therefore anti-democratic. No one really wants to admit this. It's awkward to bring it up at parties (I am not fun at parties).

Lots of AI policy teams are disempowered because they have no direct technical execution ability - they need to internally horse-trade to get anything done, so they aren't able to do much original research, and mostly rebrand existing projects.

Most technical people think policy can't matter for AI, because of the aforementioned unfathomably-behind nature of most governments. A surprisingly large % of people who think this also think this isn't a problem.

A surprisingly large amount of AI policy is illegible, because mostly the PR-friendly stuff gets published, and many of the smartest people working in AI policy circulate all their stuff privately (this is a weird dynamic and probably a quirk/departure from the norm)

Many of the immediate problems of AI (e.g., bias) are so widely talked about because they're at least somewhat tractable (you can make measures, you can assess, you can audit). Many of the long-term problems aren't discussed because no one has a clue what to do about them.
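
To make "somewhat tractable" concrete, here is a minimal sketch of the kind of measure an audit can actually compute today: a demographic parity gap over model decisions. The function name, data, and group labels are all hypothetical, chosen purely for illustration, not taken from any real audit.

```python
# Minimal sketch: demographic parity difference, a standard fairness
# measure that makes "bias" concretely measurable and auditable.
# All data below is hypothetical, for illustration only.

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rate between any two groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions (1 = approved) and group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(demographic_parity_difference(decisions, groups))  # 0.6 - 0.4 = 0.2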

The notion of building 'general' and 'intelligent' things is broadly frowned on in most AI policy meetings. Many people have a prior that it's impossible for any machine learning-based system to be actually smart. These people also don't update in response to progress.

Many technologists (including myself) are genuinely nervous about the pace of progress. It's absolutely thrilling, but the fact it's progressing at like 1000X the rate of gov capacity building is genuine nightmare fuel.

The default outcome of current AI policy trends in the West is we all get to live in a libertarian Snow Crash wonderland where a small number of companies rewire the world. Everyone can see this train coming and can't work out how to stop it.

Like 95% of the immediate problems of AI policy are just "who has power under capitalism", and you literally can't do anything about it. AI costs money. Companies have money. Therefore companies build AI. Most talk about democratization is PR-friendly bullshit that ignores this.

Some companies deliberately keep their AI policy teams AWAY from engineers. I regularly get emails from engineers at $bigtech asking me to INTRO THEM to their own policy teams, or give them advice on how to raise policy issues with them.

Sometimes, bigtech companies seem to go completely batshit about some AI policy issue, and 90% of the time it's because some internal group has figured out how to run a successful internal political campaign, and the resulting policy moves are really about hiring and retention.

Some people who work on frontier AI policy think a legitimate goal of AI policy should be to ensure governments (especially the US government) have almost no understanding of the rate of progress at the frontier, thinking it safer for companies to rambo this solo (I disagree with this).

It's functionally impossible to talk about the weird (and legitimate) problems of AI alignment in public/broad forums (e.g., this Twitter thread). It is like signing up to be pelted with rotten vegetables, or called a bigot, which makes these issues very hard to discuss openly.

AI really is going to change the world. Things are going to get 100-1000X cheaper and more efficient. This is mostly great. However, historically, when you make stuff 100X-1000X cheaper, you upend the geopolitical order. This time probably won't be different.

People wildly underestimate how much influence individuals can have in policy. I've had a decent amount of impact by just turning up and working on the same core issues (measurement and monitoring) for multiple years. This is fun, but also scares the shit out of me.

AI policy is anti-democratic for the same reason large-scale AI is anti-democratic - companies have money, so they can build teams to turn up at meetings all the time and slowly move the Overton window. It's hard to do this if it's not your day job.

Lots of the seemingly most robust solutions for reducing AI risk require the following things to happen: full information sharing on capabilities between US and China and full monitoring of software being run on all computers everywhere all the time. Pretty hard to do!

It's likely that companies are one of the most effective ways to build decent AI systems - companies have money, can move quickly, and have fewer stakeholders than governments. This is a societal failing and many problems in AI deployment stem from this basic fact.

Most technologists feel like they can do anything wrt AI because governments (in the West) have shown pretty much zero interest in regulating AI, beyond punishing infractions in a small number of products. Many orgs do skeezy shit under the radar and gamble no one will notice.

Discussions about AGI tend to be pointless as no one has a precise definition of AGI, and most people have radically different definitions. In many ways, AGI feels more like a shibboleth used to understand if someone is in- or out-group wrt some issues.

The concept of 'information hazards' regularly ties up some of the smartest people and causes them to become extremely unproductive and afraid to talk or think about certain ideas. It's a bit of a mind virus.

At the same time, there are certain insights which can seem really frightening and may actually be genuine information hazards, and it's very hard to understand when you're being appropriately paranoid, and when you're being crazy (see above).

It's very hard to bring the various members of the AI world together around one table, because some people who work on long-term/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms. V alienating.

Most people working on AI massively discount how big of a deal human culture is for the tech development story. They are aware the world is full of growing economic inequality, yet are very surprised when people don't welcome new inequality-increasing capabilities with joy.

People don't take guillotines seriously. Historically, when a tiny group gains a huge amount of power and makes life-altering decisions for a vast number of people, the minority gets actually, for real, killed. People feel like this can't happen anymore.

IP and antitrust laws actively disincentivize companies from coordinating on socially-useful joint projects. The system we're in has counter-incentives for cooperation.

Lots of AI research papers are missing really important details that seem inconsequential but are actually fundamental to how the thing works. Most people in most labs spend time spotting the 'hidden facts' in papers from other labs. It's a very weird game to play.

Many people developing advanced AI systems feel they're in a race with one another. Half of these people are desperately trying to change the race dynamics to stop the race. Some people are just privately trying to win.

In AI, like in any field, most of the people who hold power are people who have been very good at winning a bunch of races. It's hard for these people to not want to race and they privately think they should win the race.

Doing interdisciplinary work in AI policy is incredibly difficult. Corporate culture pushes against it, and to bring on actual subject-matter experts you need to invest tons of $$$ in systems to make your infrastructure easier to deal with. It requires huge, sustained effort.

Most universities are wildly behind the private labs in terms of infrastructure and scale. Huge chunks of research have, effectively, become private endeavors due to the cost of scale. This is also making universities MORE dependent on corporations (e.g for API access).

Many people who stay in academia specialize in types of research that don't require huge experimental infrastructure (e.g., multi-thousand-GPU clusters). This means the next generation of technologists is even more likely to go to the private sector to study at-scale problems.

Scale is genuinely important for capabilities. You literally can't study some problems if you futz around with small models. This also means people who only deal with small models have broken assumptions about how the tech behaves at scale.
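
One way to see why small-model intuitions break, sketched under assumptions: fit a power law to eval losses from small models and extrapolate upward. The parameter counts, losses, and target size below are invented for illustration (real scaling-law fits follow work like Kaplan et al. 2020), and the catch is exactly the thread's point: downstream capabilities often don't track this smooth curve, which you can't see if you only ever run small models.

```python
# Minimal sketch: extrapolating a loss-vs-parameter-count power law
# from small models, loss(N) ~= a * N**-b. All numbers are invented
# for illustration only.
import numpy as np

# Hypothetical (parameter count, eval loss) pairs from small models.
params = np.array([1e6, 1e7, 1e8])
loss = np.array([5.0, 4.0, 3.2])

# Fit log(loss) = log(a) - b * log(N) by least squares.
b, log_a = np.polyfit(-np.log(params), np.log(loss), 1)

# Naive extrapolation to a 100B-parameter model.
print(np.exp(log_a) * 1e11 ** (-b))  # ~1.6: plausible-looking, but it
# says nothing about which qualitative capabilities appear at scale.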

At the different large-scale labs (where large-scale = multiple thousands of GPUs), there are different opinions among leadership on how important safety is. Some people care about safety a lot, some people barely care about it. If safety issues turn out to be real, uh oh!

Working in AI right now feels like how I imagine it was to be a housing-debt nerd in the run-up to the global financial crisis. You can sense that weird stuff is happening in the large, complicated underbelly of the tech ecosystem.

AI policy can make you feel completely insane because you will find yourself repeating the same basic points (academia is losing to industry, government capacity is atrophying) and everyone will agree with you and nothing will happen for years.

AI policy can make you feel completely exhilarated because sometimes you meet people who have a) vision, b) power, and c) an ability to stay focused. You can do tremendous, impactful work if you find these people, and finding them is 50% of my job. (This thread is a beacon!)

'Show, don't tell' is real. If you want to get an idea across, demo a real system, live. This will get you 100X the impact of some PR-fied memo. Few people do live demos because the lack of choreography freaks them out, yet policymakers hate choreographed dog and pony shows. What gives?

AI development is a genuine competition among nations. AI is crucial to future economic and national security. Lots of people who make safety/risk-focused arguments to policymakers don't acknowledge this prior and as a consequence their arguments aren't listened to.

If you have access to decent compute, then you get to see the sorts of models that will be everywhere in 3-5 years, and this gives you a crazy information asymmetry advantage relative to everyone without a big computer.

China is deriving a real, strategic advantage by folding in large-scale surveillance+AI deals as part of 'One Belt One Road' investment schemes worldwide. China is innovating on architectures for political control.

AI may be one of the key ingredients to maintaining political stability in the future. If Xi is able to retain control in China, the surveillance capabilities of AI will partially be why. This has vast and dire implications for the world - countries copy what works.

The inequality and social discord in the West may ultimately prevent us from capturing many of the advantages AI could be giving to society - people are outright rejecting AI in many contexts due to the capitalist form of development.

