We are approaching a world where superintelligent Artificial Intelligence (AI) is the norm in our everyday lives, assisting us with everything from writing emails to parking cars. These systems have almost become as smart as humans. In the 1950s, Alan Turing devised a litmus test for determining a machine’s ability to exhibit intelligent behavior comparable to that of a human. According to the Turing Test, if a human conversing with an AI over a text-only interface cannot tell whether the entity on the other side is human or machine, intelligence has been reached.
According to researchers at Stanford, the Large Language Model (LLM) GPT-4 passed a rigorous Turing test in 2023, the first computer program to successfully do so. Although AI hasn’t reached sentience yet, LLMs have become the closest thing to it.
But when the day comes that AI reaches human-level intelligence and achieves sentience, will it still listen to us? Yann LeCun, currently Chief AI Scientist at Meta, made a post on LinkedIn stating his belief that AI will always be subservient to us.
What if one day in the future we create a superintelligent AI that proves itself extremely useful to humans, but then goes rogue and makes decisions (on its own) that purposely kill 500 humans… would we turn it off?
There’s a philosophical thought experiment called the Trolley Problem that has been widely discussed within and outside psychology. The gist of it is this: if a runaway trolley were heading down the tracks and about to hit five people, would you pull the lever to redirect it down another track, where it would hit only one person? There’s no right or wrong answer, but it stimulates discussion about what seems morally and ethically right to individuals.
So would it be immoral to turn off a sentient, superintelligent AI that has gone rogue and stopped listening to humans? Nate Silver says he’s spoken to Effective Altruists who actually think it would be immoral.
In his recent book, On the Edge, Nate Silver discusses the value of human lives with Will MacAskill. Over dinner, the two make some rough calculations and determine that your well-being counts for more than the well-being of an ant, and that the well-being of a chicken counts for more than an ant’s, but less than yours, a human’s. “In the case of valuing animal lives, MacAskill proposed a data-driven heuristic: going by the number of neurons in the animal’s brain, it would mean one chicken is worth one three-hundredth of a human”. It’s an interesting way to think about the inherent value of living things, and it almost rationalizes the notion of seeing ourselves as the apex predator and most important thing on this planet. But as Silver notes, you do wind up with some pretty unorthodox conclusions: “You end up putting elephants above humans, since elephants have more neurons”. By this logic, a smarter human is also more valuable than a dumber one. So where does that leave humans when compared to highly intelligent machines?
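To make the arithmetic of the neuron-count heuristic concrete, here is a minimal sketch in Python. The neuron counts are rough, commonly cited approximations I’m assuming for illustration; they are not figures taken from Silver or MacAskill.

```python
# A rough, illustrative sketch of the neuron-count heuristic.
# The counts below are commonly cited approximations assumed for this example;
# they are not figures taken from Silver or MacAskill.
NEURON_COUNTS = {
    "human": 86_000_000_000,      # ~86 billion neurons
    "elephant": 257_000_000_000,  # ~257 billion neurons (most of them in the cerebellum)
    "chicken": 220_000_000,       # ~220 million neurons
    "ant": 250_000,               # ~250 thousand neurons
}

def relative_worth(animal: str, baseline: str = "human") -> float:
    """Moral weight of `animal` relative to `baseline`, going purely by neuron count."""
    return NEURON_COUNTS[animal] / NEURON_COUNTS[baseline]

for animal in NEURON_COUNTS:
    print(f"{animal}: {relative_worth(animal):.6f} of a human")

# A chicken comes out around 0.0026 of a human (the one-in-a-few-hundred ballpark),
# while an elephant comes out above a human, the unorthodox conclusion Silver notes.
```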
This is very important to think about, because if a human were to kill 500 people with intent, we would most certainly stop them. So let’s answer two key questions here. First, who or what are Effective Altruists (EA)? Second, who are the people making decisions about superintelligent AI, and why are they the ones deciding what’s morally right or wrong for humanity as a whole?
When it comes to technological advancements, the free market reigns. Creative computer science engineers partner with outgoing, sales-minded entrepreneurs and develop new technological solutions to problems all the time. That’s the simple story of how many of the technologies we use are born. But how do they develop into great resources and gain a global user base? Developing good technology is expensive, so these companies need money, and lots of it. Most turn to Silicon Valley’s VCs for funding. The top 20 VC firms in the valley are helmed by individuals who know each other intimately and invest in the same companies. By choosing where to spend their money, the people who run the big VC firms decide which technologies will grow. Have a read through the Techno-Optimist Manifesto on the website of Andreessen Horowitz, one of the most prominent VCs investing in AI.
Most VCs say what propels them is making life better for all of humanity. The outcomes of their decisions, even the negative ones, have an impact on all of us. Many in this elitist group subscribe to a form of thought leadership through a movement called “Effective Altruism”. Founded by Will MacAskill and Toby Ord, the movement describes itself as “advocates for using rigorous analysis to determine how to do the most good. Originally focused on charitable giving, EA now extends these principles to evaluate other issues, like existential risk”. Here’s an example of Effective Altruism gone awry. Sam Bankman-Fried, founder of the now-defunct cryptocurrency exchange FTX, was a self-proclaimed true believer in EA. His attempt at Effective Altruism through FTX landed him in jail with a 25-year sentence after he was charged and convicted on all seven counts of:
Two counts of wire fraud
Two counts of conspiracy to commit wire fraud
One count of conspiracy to commit securities fraud
One count of conspiracy to commit commodities fraud
One count of conspiracy to commit money laundering
The TL;DR on Sam Bankman-Fried is this: he founded both FTX (a cryptocurrency exchange) and Alameda Research (a hedge fund specializing in cryptocurrencies). Alameda fraudulently borrowed as much cash as it needed from FTX, money that came from customer deposits. At SBF’s sentencing hearing, Judge Kaplan estimated that FTX’s investors lost $1.7 billion and its customers lost $8 billion. (That’s $8 billion from the pockets of hard-working normies like you and me!) During the hearings it came to light that both Alameda and FTX failed to produce balance sheets, documents that (presumably) investors had to have before they funneled nearly $2 billion into Sam and his ventures. Some of the biggest VC firms, including Sequoia Capital and Paradigm, had invested in Sam and FTX.
So here’s what could have happened: either (a) VCs blindly invested money into Sam and FTX without doing their due diligence, because if they had, they would have noticed the missing balance sheets; or (b) they did their due diligence, saw the gaps, and still chose to move forward with the investments, presumably for the greater good.
This leaves one to ponder what it means for the investments VCs are making in AI. Are they doing their due diligence? Is there a forum where normies like us can ask them these questions? (I’d love to know what you know about this, whoever you are. Please leave a comment in the section below.)
Humanity as a whole has a lot to gain from the advancements made in AI, but some have more to gain than others. Namely, investors stand to make a lot of money, and that can lead to cursory decision-making, as we saw with FTX. AI is charting a course into the unknown and will change the world as we know it. VCs must ensure the technology firms they fund provide transparent information about the growth, successes, failures, and challenges faced by their AIs. It is critical for humanity to have open discourse.