Counterarguments to Benjamin Hoffman's criticism of OpenAI


#1

In April, Benjamin Hoffman published a post criticising OpenAI -
http://lesswrong.com/lw/oul/openai_makes_humanity_less_safe/.

What are the counterarguments to his post?


#2

His main arguments seem to be:

  • OpenAI and DeepMind are the only major players in AI research, and giving DeepMind a competitor may create an “arms race” in which safety concerns are de-prioritised in favour of faster progress.

  • Publishing OpenAI’s research will give a boost to other organisations aspiring to build AI, some of which may not be concerned with AI safety.

Counterarguments:

  • DeepMind and OpenAI are not the only organisations capable of this type of research. You can add Microsoft, Facebook, Amazon, Apple, Boston Dynamics and any other IT organisation that has bought into AI as the next big thing. Adding OpenAI to this mix is a deliberate attempt to democratise a technology in which an arms race is already under way. That race is already creating a power disparity between large organisations with resources to spare and smaller groups that would otherwise struggle to keep up with the new technology.

  • The publishing is deliberate, and assumes that the technology being unleashed is less like “how to make a nuclear bomb” and more like “how to harness electricity”. The benefit is economic: not all the profit from, and knowledge about, the new technology will be held by a few large players. Even if we assume some aspects are more bomb-like (taking a precautionary approach), smaller organisations will always have less computing power than larger ones for these projects, and will as a result be a few years behind the big players. It won’t be some small agency that makes dangerous public mistakes with self-upgrading technology. For instance, giving an AI the ability to design and create its next-generation improvement implies control of significant resources, such as workshops capable of producing top-of-the-range silicon chips.

  • There is a legitimate concern that a “bomb-like” AI might be achievable with fewer resources, similar to worries that rogue states might create “dirty bombs” or low-yield nuclear devices on a smaller budget. I’m not sure how to address that concern, but my gut feeling is that it focuses too much on deliberate weaponisation of AI, whilst the more credible threat we can actually address is unsafe self-improving AI.

  • Conjecture: for every non-safety-conscious or actively malicious organisation that can use OpenAI’s research, the OpenAI project is just as likely to add a group that does take AI safety seriously, and it will equip them to pose the right questions and seek answers. Leaving AI research purely to the “big players”, with no oversight (or oversight only at the mercy of politicians and lobbyists) and no one able to pose the right questions, is more dangerous.