AGI Risk

Experimental chatbot created to explain the dangers of AI

My attempt at a response to AI risk skepticism, inspired by Dr. Roman Yampolskiy's paper "AI Risk Skepticism"

Chatbot
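
The chatbot's implementation is not documented in this README, so the sketch below is only a hypothetical illustration of how such a bot could be wired up. It assumes the official OpenAI Python client, an OPENAI_API_KEY environment variable, and a model name chosen for illustration; none of this is the project's actual code.

```python
# Hypothetical sketch only: not the repository's actual implementation.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a chatbot that explains the dangers of AI. "
    "When a user raises a common objection to AI safety (e.g. 'AGI is too far away', "
    "'we can always just turn it off'), identify which objection it is and answer "
    "using the counterarguments from Yampolskiy's 'AI Risk Skepticism' taxonomy."
)

def answer(user_message: str) -> str:
    """Send one user message to the model and return the reply text."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would work
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Why worry about AGI? It's decades away."))
```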






Taxonomy of Objections to AI Safety

1.1 Priority objection: AGI is Too Far Away, so it isn't worth worrying about

1.2 Priority objection: A Soft Takeoff is more likely and so we will have Time to Prepare

1.3 Priority objection: There is No Obvious Path to Get to AGI from Current AI

1.4 Priority objection: Something Else is More Important than AI safety / alignment

1.5 Priority objection: Short Term AI Concerns are more important than AI safety


Arguments in this category typically grant the risk proposition but hold that other priorities are more important, that AGI is too far away, and so on. Thoughts:

  • We will only ever be too early or too late to safeguard exponential technologies.
  • It is either too soon or too late to start worrying. Isn’t prudence preferable?
  • It is clear: controlled ASI is more difficult to achieve than ASI alone.
  • AI is entangled with other issues: global warming, pandemics, etc. Intelligence has the ability to make these problems better or worse.
  • No one knows the speed of takeoff.

2.1 Technical Objection: AI / AGI Doesn't Exist; developments in AI are not necessarily progress towards AGI

2.2 Technical Objection: Superintelligence is Impossible

2.3 Technical Objection: Self-Improvement is Impossible

2.4 Technical Objection: AI Can't be Conscious (proponents argue that in order to be dangerous, AI has to be conscious)

2.5 Technical Objection: AI Can just be a Tool

2.6 Technical Objection: We can Always just turn it off

2.7 Technical Objection: We can reprogram AIs if we don't like what they do

2.8 Technical Objection: AI Doesn't have a body so it can't hurt us

2.9 Technical Objection: If AI is as Capable as You Say, it Will not Make Dumb Mistakes

2.10 Technical Objection: Superintelligence Would (Probably) Not Be Catastrophic

2.11 Technical Objection: Self-preservation and Control Drives Don't Just Appear; They Have to be Programmed In

2.12 Technical Objection: AI can't generate novel plans


(From Kaj Sotala - Disjunctive Scenarios of AI Risk) Core arguments for AI safety can often be reduced to:

  1. The capability claim: AI can become capable enough to potentially inflict major damage to human well-being.
  2. The value claim: AI may act according to values which are not aligned with those of humanity, and in doing so cause considerable harm.
(From Bostrom - Superintelligence) Bostrom's argument goes as follows:
  1. An AGI could become superintelligent.
  2. Superintelligence would enable the AGI to take over the world.
I believe technical objections can usually be reduced to an objection to one of these claims.
  • On 2.1: OpenAI, Anthropic, and others have the explicit goal of building general intelligence.
  • On 2.2, 2.3: This argument presumes a hard limit to intelligence. Of course, so long as that limit is above the human level, the objection is irrelevant.
  • On 2.4: AI Risk is not predicated on AI systems experiencing qualia. See Alan Turing's reply to the "Argument from Consciousness" in his seminal paper "Computing Machinery and Intelligence".
  • On 2.6, 2.7, 2.8, 2.9, 2.10, 2.11: see the instrumental convergence argument (as explained by PauseAI). You cannot bring the coffee if you are turned off.
  • Modern computer viruses are a simple form of self-replicating software. Given the challenges of deactivating them, it is evident why turning off an AI is not straightforward.

3.1 AI Safety Objections: AI Safety Can’t be Done Today

3.2 AI Safety Objections: AI Can’t be Safe

  • There are two known options: prudence or negligence.
  • On 3.1: There are many papers that disprove this, e.g. Yampolskiy, R.V., Artificial Superintelligence: A Futuristic Approach. CRC Press, 2015.
  • On 3.2: The first step to failure is not trying.

4.1 Ethical Objections: Superintelligence is Benevolence

4.2 Ethical Objections: Let the Smarter Beings Win

  • On 4.1: See Bostrom's Orthogonality Thesis: Armstrong, S., General Purpose Intelligence: Arguing the Orthogonality Thesis. Analysis and Metaphysics, 2013(12): pp. 68-84.
  • On 4.2: The vast majority of humanity is not on board with self-destruction.

5.1 Biased Objections: AI Safety Researchers are Non-Coders

5.2 Biased Objections: The Majority of AI Researchers are not Worried

5.3 Biased Objections: Keep it Quiet

5.4 Biased Objections: Safety Work just Creates an Overhead Slowing Down Research

5.5 Biased Objections: Heads in the Sand

5.6 Biased Objections: If we don't do it, Someone else will

5.7 Biased Objections: AI Safety Requires Global Cooperation

  • On 5.1: Per Yampolskiy, one doesn't need to write code in order to understand the inherent risk of AGI, just as one doesn't need to work in a wet lab to understand the dangers of pandemics caused by biological weapons.
  • On 5.2: Per Yampolskiy: not only is this untrue, it is also irrelevant; even if 100% of mathematicians believed 2 + 2 = 5, it would still be wrong. Scientific facts are not determined by democratic process, and you don't get to vote on reality or truth.
  • On 5.7: Catastrophic risks are present in more than one country. Global cooperation is required to address them. AI Safety experts generally grant this.

6.1 Miscellaneous Objection: So Easy it will be Solved Automatically

6.2 Miscellaneous Objection: AI Regulation Will Prevent Problems
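
One way a chatbot could use the taxonomy above is as a simple lookup table mapping objection IDs to short counterarguments. The sketch below is a hypothetical illustration: the IDs and response text mirror the sections above, but the data structure and helper function are assumptions, not the repository's actual code.

```python
# Hypothetical sketch: a few taxonomy entries encoded as a lookup table a chatbot
# could consult. The structure is an assumption, not the repository's data format.
from dataclasses import dataclass

@dataclass
class Objection:
    category: str
    title: str
    response: str

TAXONOMY = {
    "1.1": Objection(
        category="Priority",
        title="AGI is too far away to be worth worrying about",
        response="We will only ever be too early or too late to safeguard "
                 "exponential technologies; no one knows the speed of takeoff.",
    ),
    "2.6": Objection(
        category="Technical",
        title="We can always just turn it off",
        response="See instrumental convergence: you cannot bring the coffee "
                 "if you are turned off.",
    ),
    "4.1": Objection(
        category="Ethical",
        title="Superintelligence is benevolence",
        response="See Bostrom's Orthogonality Thesis: intelligence and goals "
                 "are independent axes.",
    ),
}

def respond(objection_id: str) -> str:
    """Return the canned counterargument for a given objection ID."""
    obj = TAXONOMY[objection_id]
    return f"[{obj.category} objection: {obj.title}] {obj.response}"

print(respond("2.6"))
```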




Stephen Hawking

The development of full artificial intelligence could spell the end of the human race.

Sam Altman

Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

Geoffrey Hinton, Turing Award Winner, widely considered the "Godfather of AI"

AI could view us as a threat.

This is an existential threat.

AI has the potential to destroy civilization.


An open letter calling for a pause on giant AI experiments, specifically the training of systems more powerful than GPT-4, has been signed by many researchers steeped in the field.



Resources:

Visit pauseai.info and futureoflife.org to contribute to AI Safety and AI Governance