May 25, 2023

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity

By Tamlyn Hunt

Human face behind binary code

devrimb/Getty Images

“The idea that this stuff could actually get smarter than people.... I thought it was way off…. Obviously, I no longer think that,” Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as “ the godfather of AI ,” said after he quit his job in April so that he can warn about the dangers of this technology .

He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a researcher in consciousness, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing . By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.

Why are we all so concerned? In short: AI development is going way too fast.

The key issue is the profoundly rapid improvement in conversing among the new crop of advanced "chatbots," or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZero AI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.

A team of Microsoft researchers analyzing OpenAI’s GPT-4 , which I think is the best of the new advanced chatbots currently available, said it had, "sparks of advanced general intelligence" in a new preprint paper .

In testing GPT-4, it performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent in the previous GPT-3.5 version, which was trained on a smaller data set. They found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times : "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.” In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI called regulation “crucial.”

Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will—and this is what I worry about the most—be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligence for a good overview ) and has been studied and argued about by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky , for decades now.

I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.

Some argue that these LLMs are just automation machines with zero consciousness , the implication being that if they’re not conscious they have less chance of breaking free from their programming. Even if these language models, now or in the future, aren’t at all conscious, this doesn’t matter. For the record, I agree that it’s unlikely that they have any actual consciousness at this juncture—though I remain open to new facts as they come in.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad ways, including potentially use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So, the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely . But the moratorium being called for is to stop development of any new models more powerful than 4.0—and this can be enforced, with force if required. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we know already we won’t be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We should not open Pandora’s box any more than it already has been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of  Scientific American.

will ai help the world or hurt it essay

Why AI Will Save the World

Marc Andreessen

  • Hacker News

The era of Artificial Intelligence is here, and boy are people freaking out.

Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

First, a short description of what AI is : The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

A shorter description of what AI isn’t : Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies .

An even shorter description of what AI could be : A way to make everything we care about better.

Why AI Can Make Everything We Care About Better

The most validated core conclusion of social science across many decades and thousands of studies is that human intelligence makes a very broad range of life outcomes better. Smarter people have better outcomes in almost every domain of activity: academic achievement, job performance, occupational status, income, creativity, physical health, longevity, learning new skills, managing complex tasks, leadership, entrepreneurial success, conflict resolution, reading comprehension, financial decision making, understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.

Further, human intelligence is the lever that we have used for millennia to create the world we live in today: science, technology, math, physics, chemistry, medicine, energy, construction, transportation, communication, art, music, culture, philosophy, ethics, morality. Without the application of intelligence on all these domains, we would all still be living in mud huts, scratching out a meager existence of subsistence farming. Instead we have used our intelligence to raise our standard of living on the order of 10,000X over the last 4,000 years.

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it .

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.
  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.
  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.
  • Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.
  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.
  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.
  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.
  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.
  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas . Talking to an empathetic AI friend really does improve their ability to handle adversity . And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.

The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.

We should be living in a much better world with AI, and now we can.

In contrast to this positive view, the public conversation about AI is presently shot through with hysterical fear and paranoia.

We hear claims that AI will variously kill us all, ruin our society, take all our jobs, cause crippling inequality, and enable bad people to do awful things.

What explains this divergence in potential outcomes from near utopia to horrifying dystopia?

Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic – a social contagion that convinces people the new technology is going to destroy the world, or society, or both. The fine folks at Pessimists Archive have documented these technology-driven moral panics over the decades; their history makes the pattern vividly clear. It turns out this present panic is not even the first for AI .

Now, it is certainly the case that many new technologies have led to bad outcomes – often the same technologies that have been otherwise enormously beneficial to our welfare. So it’s not that the mere existence of a moral panic means there is nothing to be concerned about.

But a moral panic is by its very nature irrational – it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.

And wow do we have a full-blown moral panic about AI right now.

This moral panic is already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws. These actors, who are making extremely dramatic public statements about the dangers of AI – feeding on and further inflaming moral panic – all present themselves as selfless champions of the public good.

But are they?

And are they right or wrong?

Economists have observed a longstanding pattern in reform movements of this kind. The actors within movements like these fall into two categories – “Baptists” and “Bootleggers” – drawing on the historical example of the prohibition of alcohol in the United States in the 1920’s :

A cynic would suggest that some of the apparent Baptists are also Bootleggers – specifically the ones paid to attack AI by their universities , think tanks , activist groups , and media outlets . If you are paid a salary or receive grants to foster AI panic…you are probably a Bootlegger.

The problem with the Bootleggers is that they win . The Baptists are naive ideologues, the Bootleggers are cynical operators, and so the result of reform movements like these is often that the Bootleggers get what they want – regulatory capture, insulation from competition, the formation of a cartel – and the Baptists are left wondering where their drive for social improvement went so wrong.

We just lived through a stunning example of this – banking reform after the 2008 global financial crisis. The Baptists told us that we needed new laws and regulations to break up the “too big to fail” banks to prevent such a crisis from ever happening again. So Congress passed the Dodd-Frank Act of 2010, which was marketed as satisfying the Baptists’ goal, but in reality was coopted by the Bootleggers – the big banks. The result is that the same banks that were “too big to fail” in 2008 are much, much larger now.

So in practice, even when the Baptists are genuine – and even when the Baptists are right – they are used as cover by manipulative and venal Bootleggers to benefit themselves. 

And this is what is happening in the drive for AI regulation right now.

However, it isn’t sufficient to simply identify the actors and impugn their motives. We should consider the arguments of both the Baptists and the Bootleggers on their merits.

The first and original AI doomer risk is that AI will decide to literally kill humanity.

The fear that technology of our own creation will rise up and destroy us is deeply coded into our culture. The Greeks expressed this fear in the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man, for which Prometheus was condemned to perpetual torture by the gods. Later, Mary Shelley gave us moderns our own version of this myth in her novel Frankenstein, or, The Modern Prometheus , in which we develop the technology for eternal life, which then rises up and seeks to destroy us. And of course, no AI panic newspaper story is complete without a still image of a gleaming red-eyed killer robot from James Cameron’s Terminator films.

The presumed evolutionary purpose of this mythology is to motivate us to seriously consider potential risks of new technologies – fire, after all, can indeed be used to burn down entire cities. But just as fire was also the foundation of modern civilization as used to keep us warm and safe in a cold and hostile world, this mythology ignores the far greater upside of most – all? – new technologies, and in practice inflames destructive emotion rather than reasoned analysis. Just because premodern man freaked out like this doesn’t mean we have to; we can apply rationality instead.

My view is that the idea that AI will decide to literally kill humanity is a profound category error . AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want , it doesn’t have goals , it doesn’t want to kill you , because it’s not alive . And AI is a machine – is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI – Baptists – who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development , all the way up to military airstrikes on datacenters and nuclear war . They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well.

Specifically, I think three things are going on:

First, recall that John Von Neumann responded to Robert Oppenheimer’s famous hand-wringing about his role creating nuclear weapons – which helped end World War II and prevent World War III – with, “Some people confess guilt to claim credit for the sin.” What is the most dramatic way one can claim credit for the importance of one’s work without sounding overtly boastful? This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI – watch their actions, not their words. (Truman was harsher after meeting with Oppenheimer: “Don’t let that crybaby in here again.” )

Second, some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.

Third, California is justifiably famous for our many thousands of cults , from EST to the Peoples Temple, from Heaven’s Gate to the Manson Family. Many, although not all, of these cults are harmless, and maybe even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time straddling the line that ultimately leads to violence and death .

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult , which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried . And it’s developed a full panoply of cult behaviors and beliefs.

This cult is why there are a set of AI risk doomers who sound so extreme – it’s not that they actually have secret knowledge that make their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.

It turns out that this type of cult isn’t new – there is a longstanding Western tradition of millenarianism , which generates apocalypse cults. The AI risk cult has all the hallmarks of a millenarian apocalypse cult. From Wikipedia, with additions by me:

“Millenarianism is the belief by a group or movement [AI risk doomers] in a coming fundamental transformation of society [the arrival of AI], after which all things will be changed [AI utopia, dystopia, and/or end of the world]. Only dramatic events [AI bans, airstrikes on datacenters, nuclear strikes on unregulated AI] are seen as able to change the world [prevent AI] and the change is anticipated to be brought about, or survived, by a group of the devout and dedicated. In most millenarian scenarios, the disaster or battle to come [AI apocalypse, or its prevention] will be followed by a new, purified world [AI bans] in which the believers will be rewarded [or at least acknowledged to have been correct all along].”

This apocalypse cult pattern is so obvious that I am surprised more people don’t see it.

Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating , and their members are engaging at dinner parties and on TV . But their extreme beliefs should not determine the future of laws and society – obviously not.

The second widely mooted AI risk is that AI will ruin our society, by generating outputs that will be so “harmful”, to use the nomenclature of this kind of doomer, as to cause profound damage to humanity, even if we’re not literally killed.

Short version: If the murder robots don’t get us, the hate speech and misinformation will.

This is a relatively recent doomer concern that branched off from and somewhat took over the “AI risk” movement that I described above. In fact, the terminology of AI risk recently changed from “AI safety” – the term used by people who are worried that AI would literally kill us – to “AI alignment” – the term used by people who are worried about societal “harms”. The original AI safety people are frustrated by this shift, although they don’t know how to put it back in the box – they now advocate that the actual AI risk topic be renamed “AI notkilleveryoneism”, which has not yet been widely adopted but is at least clear.

The tipoff to the nature of the AI societal risk claim is its own term, “AI alignment”. Alignment with what? Human values. Whose human values? Ah, that’s where things get tricky.

As it happens, I have had a front row seat to an analogous situation – the social media “trust and safety” wars. As is now obvious , social media services have been under massive pressure from governments and activists to ban, restrict, censor, and otherwise suppress a wide range of content for many years. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”. 

My big learnings from the social media wars are:

On the one hand, there is no absolutist free speech position. First, every country, including the United States, makes at least some content illegal . Second, there are certain kinds of content, like child pornography and incitements to real world violence, that are nearly universally agreed to be off limits – legal or not – by virtually every society. So any technological platform that facilitates or generates content – speech – is going to have some restrictions.

On the other hand, the slippery slope is not a fallacy, it’s an inevitability. Once a framework for restricting even egregiously terrible content is in place – for example, for hate speech, a specific hurtful word, or for misinformation, obviously false claims like “ the Pope is dead ” – a shockingly broad range of government agencies and activist pressure groups and nongovernmental entities will kick into gear and demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences. They will do this up to and including in ways that are nakedly felony crimes . This cycle in practice can run apparently forever, with the enthusiastic support of authoritarian hall monitors installed throughout our elite power structures. This has been cascading for a decade in social media and with only certain exceptions continues to get more fervent all the time.

And so this is the dynamic that has formed around “AI alignment” now. Its proponents claim the wisdom to engineer AI-generated speech and thought that are good for society, and to ban AI-generated speech and thoughts that are bad for society. Its opponents claim that the thought police are breathtakingly arrogant and presumptuous – and often outright criminal, at least in the US – and in fact are seeking to become a new kind of fused government-corporate-academic authoritarian speech dictatorship ripped straight from the pages of George Orwell’s 1984 .

As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites – which includes many of the people who work in and write about the tech industry – many of my readers will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society. I will not attempt to talk you out of this now, I will simply state that this is the nature of the demand, and that most people in the world neither agree with your ideology nor want to see you win .

If you don’t agree with the prevailing niche morality that is being imposed on both social media and AI via ever-intensifying speech codes, you should also realize that the fight over what AI is allowed to say/generate will be even more important – by a lot – than the fight over social media censorship. AI is highly likely to be the control layer for everything in the world. How it is allowed to operate is going to matter perhaps more than anything else has ever mattered. You should be aware of how a small and isolated coterie of partisan social engineers are trying to determine that right now, under cover of the age-old claim that they are protecting you.

In short, don’t let the thought police suppress AI.

The fear of job loss due variously to mechanization, automation, computerization, or AI has been a recurring panic for hundreds of years, since the original onset of machinery such as the mechanical loom . Even though every new major technology has led to more jobs at higher wages throughout history, each wave of this panic is accompanied by claims that “this time is different” – this is the time it will finally happen, this is the technology that will finally deliver the hammer blow to human labor. And yet, it never happens. 

We’ve been through two such technology-driven unemployment panic cycles in our recent past – the outsourcing panic of the 2000’s, and the automation panic of the 2010’s. Notwithstanding many talking heads, pundits, and even tech industry executives pounding the table throughout both decades that mass unemployment was near, by late 2019 – right before the onset of COVID – the world had more jobs at higher wages than ever in history.

Nevertheless this mistaken idea will not die .

And sure enough, it’s back .

This time , we finally have the technology that’s going to take all the jobs and render human workers superfluous – real AI. Surely this time history won’t repeat, and AI will cause mass unemployment – and not rapid economic, job, and wage growth – right?

No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy . This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things . This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker . A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages.

To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise. This in turn causes economic growth and job growth, while motivating the creation of new jobs and new industries. If a market economy is allowed to function normally and if technology is allowed to be introduced freely, this is a perpetual upward cycle that never ends. For, as Milton Friedman observed, “Human wants and needs are endless” – we always want more than we have. A technology-infused market economy is the way we get closer to delivering everything everyone could conceivably want, but never all the way there. And that is why technology doesn’t destroy jobs and never will.

These are such mindblowing ideas for people who have not been exposed to them that it may take you some time to wrap your head around them. But I swear I’m not making them up – in fact you can read all about them in standard economics textbooks. I recommend the chapter The Curse of Machinery in Henry Hazlitt’s Economics In One Lesson , and Frederic Bastiat’s satirical Candlemaker’s Petition to blot out the sun due to its unfair competition with the lighting industry, here modernized for our times .

But this time is different , you’re thinking. This time, with AI, we have the technology that can replace ALL human labor.

But, using the principles I described above, think of what it would mean for literally all existing human labor to be replaced by machines.

It would mean a takeoff rate of economic productivity growth that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop across the board to virtually zero. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and AI as they could as fast as possible to meet all the new demand.

Suppose AI once again replaces that labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be a straight spiral up to a material utopia that neither Adam Smith or Karl Marx ever dared dream of. 

We should be so lucky.

Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is, OK, Marc, suppose AI does take all the jobs, either for bad or for good. Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and regular people get nothing?

As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual  work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet.

The classic example of this was Elon Musk’s so-called “secret plan” – which he naturally published openly – for Tesla in 2006:

Step 1, Build [expensive] sports car Step 2, Use that money to build an affordable car Step 3, Use that money to build an even more affordable car

…which is of course exactly what he’s done, becoming the richest man in the world as a result.

That last point is key. Would Elon be even richer if he only sold cars to rich people today? No. Would he be even richer than that if he only made cars for himself? Of course not. No, he maximizes his own profit by selling to the largest possible market, the world.

In short, everyone gets the thing – as we saw in the past with not just cars but also electricity, radio, computers, the Internet, mobile phones, and search engines. The makers of such technologies are highly motivated to drive down their prices until everyone on the planet can afford them. This is precisely what is already happening in AI – it’s why you can use state of the art generative AI not just at low cost but even for free today in the form of Microsoft Bing and Google Bard – and it is what will continue to happen. Not because such vendors are foolish or generous but precisely because they are greedy – they want to maximize the size of their market, which maximizes their profits.

So what happens is the opposite of technology driving centralization of wealth – individual customers of the technology, ultimately including everyone on the planet, are empowered instead, and capture most of the generated value . As with prior technologies, the companies that build AI – assuming they have to function in a free market – will compete furiously to make this happen.

Marx was wrong then, and he’s wrong now.

This is not to say that inequality is not an issue in our society. It is, it’s just not being driven by technology, it’s being driven by the reverse , by the sectors of the economy that are the most resistant to new technology, that have the most government intervention to prevent the adoption of new technology like AI – specifically housing, education, and health care. The actual risk of AI and inequality is not that AI will cause more inequality but rather that we will not allow AI to be used to reduce inequality .

AI Risk #5: Will AI Lead to Bad People Doing Bad Things?

So far I have explained why four of the five most often proposed risks of AI are not actually real – AI will not come to life and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause an ruinous increase in inequality. But now let’s address the fifth, the one I actually agree with: AI will make it easier for bad people to do bad things.

In some sense this is a tautology. Technology is a tool. Tools, starting with fire and rocks, can be used to do good things – cook food and build houses – and bad things – burn people and bludgeon people. Any technology can be used for good or bad. Fair enough. And AI will make it easier for criminals, terrorists, and hostile governments to do bad things, no question.

This causes some people to propose, well, in that case, let’s not take the risk, let’s ban AI now before this can happen . Unfortunately, AI is not some esoteric physical material that is hard to come by, like plutonium. It’s the opposite, it’s the easiest material in the world to come by – math and code.

The AI cat is obviously already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and there are outstanding open source implementations proliferating by the day . AI is like air – it will be everywhere. The level of totalitarian oppression that would be required to arrest that would be so draconian – a world government monitoring and controlling all computers? jackbooted thugs in black helicopters seizing rogue GPUs? – that we would not have a society left to protect.

So instead, there are two very straightforward ways to address the risk of bad people doing bad things with AI, and these are precisely what we should focus on.

First, we have laws on the books to criminalize most of the bad things that anyone is going to do with AI. Hack into the Pentagon? That’s a crime. Steal money from a bank? That’s a crime. Create a bioweapon? That’s a crime. Commit a terrorist act? That’s a crime. We can simply focus on preventing those crimes when we can, and prosecuting them when we cannot. We don’t even need new laws – I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use. QED.

But you’ll notice what I slipped in there – I said we should focus first on preventing AI-assisted crimes before they happen – wouldn’t such prevention mean banning AI? Well, there’s another way to prevent such actions, and that’s by using AI as a defensive tool . The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals – specifically the good guys whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.

And so, second, let’s mount major efforts to use AI for good, legitimate, defensive purposes. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.

There are already many smart people in and out of government doing exactly this, of course – but if we apply all of the effort and brainpower that’s currently fixated on the futile prospect of banning AI to using AI to protect against bad people doing bad things, I think there’s no question a world infused with AI will be much safer than the world we live in today.

There is one final, and real, AI risk that is probably the scariest at all:

AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it , and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world , everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like Tiktok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

I propose a simple strategy for what to do about this – in fact, the same strategy President Ronald Reagan used to win the first Cold War with the Soviet Union.

“We win, they lose.”

Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can.

We should seek to win the race to global AI technological superiority and ensure that China does not.

In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential.

This is the best way both to offset the real AI risks and to ensure that our way of life is not displaced by the much darker Chinese vision .

I propose a simple plan:

  • Big AI companies should be allowed to build AI as fast and aggressively as they can – but not allowed to achieve regulatory capture, not allowed to establish a government-protect cartel that is insulated from market competition due to incorrect claims of AI risk. This will maximize the technological and societal payoff from the amazing capabilities of these companies, which are jewels of modern capitalism.
  • Startup AI companies should be allowed to build AI as fast and aggressively as they can. They should neither confront government-granted protection of big companies, nor should they receive government assistance. They should simply be allowed to compete. If and as startups don’t succeed, their presence in the market will also continuously motivate big companies to be their best – our economies and societies win either way.
  • Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever. Even when open source does not beat companies, its widespread availability is a boon to students all over the world who want to learn how to build and use AI to become part of the technological future, and will ensure that AI is available to everyone who can benefit from it no matter who they are or how much money they have.
  • To offset the risk of bad people doing bad things with AI, governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities. This shouldn’t be limited to AI-enabled risks but also more general problems such as malnutrition, disease, and climate. AI can be an incredibly powerful tool for solving problems, and we should embrace it as such.
  • To prevent the risk of China achieving global AI dominance, we should use the full power of our private sector, our scientific establishment, and our governments in concert to drive American and Western AI to absolute global dominance, including ultimately inside China itself. We win, they lose.

And that is how we use AI to save the world.

It’s time to build.

Legends and Heroes

I close with two simple statements.

The development of AI started in the 1940’s, simultaneous with the invention of the computer . The first scientific paper on neural networks – the architecture of the AI we have today – was published in 1943 . Entire generations of AI scientists over the last 80 years were born, went to school, worked, and in many cases passed away without seeing the payoff that we are receiving now. They are legends, every one.

Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.

Want more a16z?

Sign up to get the best of a16z content, news, and investments.

Thanks for signing up for the a16z newsletter.

Check your inbox for a welcome note.

will ai help the world or hurt it essay

Marc Andreessen is a Cofounder and General Partner at the venture capital firm Andreessen Horowitz.

  • Game On: Marc Andreessen & Andrew Chen Talk Creative Computers Marc Andreessen and Andrew Chen
  • Politics & the Future of Tech with Marc Andreessen and Ben Horowitz Marc Andreessen and Ben Horowitz
  • Money, power, politics, and the internet’s next battleground Ben Horowitz, Marc Andreessen, Chris Dixon, and Robert Hackett
  • Fixing Higher Education & New Startup Opportunities with Marc and Ben Marc Andreessen and Ben Horowitz
  • Crisis in Higher Ed & Why Universities Still Matter with Marc & Ben Marc Andreessen and Ben Horowitz

The views expressed here are those of the individual AH Capital Management, L.L.C. (“a16z”) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/ .

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

  • AI Canon Derrick Harris , Matt Bornstein , and Guido Appenzeller Read More
  • AI is Industrializing Discovery Vijay Pande Read More
  • AI, Deep Learning, and Machine Learning: A Primer Frank Chen Read More

Oxford Martin School logo

Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well

How ai gets built is currently decided by a small group of technologists. as this technology is transforming our lives, it should be in all of our interest to become informed and engaged..

Why should you care about the development of artificial intelligence?

Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.

That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is becoming . If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.

To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?

Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?

In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.

But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how very rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. This has not been the case anymore for recent generations. Instead, it has become common that technologies unimaginable in one's youth become ordinary in later life.

This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.

The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet. 1

But, it is plausible that it is both the stuff of sci-fi fantasy and the central invention that could arrive in our, or our children’s, lifetimes.

The third reason why it is difficult to take this prospect seriously is by failing to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them.

How to develop an idea of what the future of artificial intelligence might look like?

When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI. 2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.

From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

The advantages and disadvantages of comparing machine and human intelligence

One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same range of intellectual tasks that we humans are capable of. 3 It is a machine that would be “able to learn to do anything that a human can do,” as Norvig and Russell put it in their textbook on AI. 4

Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.

The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.

However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.

Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 90s, AI systems reached superhuman levels more than a decade ago. In other games like Go or complex strategy games, this has happened more recently. 5

These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways. 6

Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from machines, and as a consequence the output of thinking machines can be very alien to us.

Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand it can fail in ways that no human would fail. 7 No human would make the mistake of drawing a horse with five legs. 8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

AI-generated image of a horse 9

A brown horse running in a grassy field. The horse appears to have five legs.

Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.” 10

In humanity’s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.

Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of humanity’s future .

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in this series on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

Timeline of the three transformative events in world history

will ai help the world or hurt it essay

A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change. 11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.

When and if AI systems might reach either of these levels is of course difficult to predict. In my companion article on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening.

AI systems can cause harm when people use them maliciously. For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance. 12

But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021. 13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called alignment problem of AI is another extreme risk. This is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers. 14

How could an AI possibly escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee all the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it. 15

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI will pursue a very fundamental goal of ensuring that it won’t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does. 16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized right from the early days of AI research more than 70 years ago. 17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article is not enough space to address all possible questions. Especially on the very worst risks of AI systems, and what we can do now to reduce them, I recommend reading the book The Alignment Problem by Brian Christian and Benjamin Hilton’s article ‘Preventing an AI-related catastrophe’ .

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to the many positive developments in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 to $50 million was spent on work to address the alignment problem. 18 Corporate AI investment in the same year was more than 2000-times larger, it summed up to $153 billion.

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each individual person this neglect means that they have a good chance to actually make a positive difference, if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide good resources on what you can do concretely if you want to work on this problem.

I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology that will determine how one of the – or plausibly the – most powerful technology in human history will transform our world.

If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up these private companies what our future — the future of humanity — will be.

With our work at Our World in Data we want to do our small part to enable a better informed public conversation on AI and the future we want to live in. You can find these resources on OurWorldinData.org/artificial-intelligence

Acknowledgements: I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments to drafts of this essay.

This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre or even silly.

Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See my article on this topic .

The fact that humans are capable of a range of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the Wikipedia entry on intelligence , for example, lists a number of definitions from various researchers and different disciplines). As a consequence there are also various definitions of ‘human-level AI’.

There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, or Full AI are sometimes synonymously used, and sometimes defined in similar, yet different ways. In specific discussions, it is necessary to define this concept more narrowly; for example, in studies on AI timelines researchers offer more precise definitions of what human-level AI refers to in their particular study.

Peter Norvig and Stuart Russell (2021) — Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

The AI system AlphaGo , and its various successors, won against Go masters. The AI system Pluribus beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) – ‘Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning’. In Science 0, no. 0 (22 November 2022): eade9097. https://doi.org/10.1126/science.ade9097 .

This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence was a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare across machine and human intelligence. Tests for AI systems are therefore comprising a wide range of tasks. See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) –  Measuring Massive Multitask Language Understanding or the definition of what would qualify as artificial general intelligence in this Metaculus prediction .

An overview of how AI systems can fail can be found in Charles Choi – 7 Revealing Ways AIs Fail . It is also worth reading through the AIAAIC Repository which “details recent incidents and controversies driven by or relating to AI, algorithms, and automation."

I have taken this example from AI researcher François Chollet , who published it here .

Via François Chollet , who published it here . Based on Chollet’s comments it seems that this image was created by the AI system ‘Stable Diffusion’.

This quote is from Holden Karnofsky (2021) – AI Timelines: Where the Arguments, and the "Experts," Stand . For Holden Karnofsky’s earlier thinking on this conceptualization of AI see his 2016 article ‘Some Background on Our Views Regarding Advanced Artificial Intelligence’ .

Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. in her widely cited report on AI timelines she defines it as a change in software technology that brings the growth rate of gross world product "to 20%-30% per year". Several other researchers define TAI in similar terms.

Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.

On the use of AI in politically-motivated disinformation campaigns see for example John Villasenor (November 2020) – How to deal with AI-enabled disinformation . More generally on this topic see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at maliciousaireport.com . A starting point for literature and reporting on mass surveillance by governments is the relevant Wikipedia entry .

See for example the Wikipedia entry on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’ , in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the reports of the AI Now Institute .

Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and Brian Christian’s 2020 book The Alignment Problem . Christian presents the thinking of many leading AI researchers from the earliest days up to now and presents an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms who work towards powerful AI – see OpenAI's article " Our approach to alignment research " from August 2022.

Stuart Russell (2019) – Human Compatible

A question that follows from this is, why build such a powerful AI in the first place?

The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits there are also large incentives for those who develop it – the governments that can use it for their goals, the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it is likely very hard to actually achieve it. It is very hard to coordinate across the whole world and agree to stop building more advanced AI – countries around the world would have to agree and then find ways to actually implement it.

In 1950 the computer science pioneer Alan Turing put it like this: “If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.” Alan. M. Turing (1950) – Computing Machinery and Intelligence , In Mind, Volume LIX, Issue 236, October 1950, Pages 433–460.

Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” quoted from Norbert Wiener (1960) – Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers. In Science.

In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

Toby Ord – The Precipice . He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.

Cite this work

Our articles and data visualizations rely on work from many different people and organizations. When citing this article, please also cite the underlying data sources. This article can be cited as:

BibTeX citation

Reuse this work freely

All visualizations, data, and code produced by Our World in Data are completely open access under the Creative Commons BY license . You have the permission to use, distribute, and reproduce these in any medium, provided the source and authors are credited.

The data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our documentation, so you should always check the license of any such third-party data before use and redistribution.

All of our charts can be embedded in any site.

Our World in Data is free and accessible for everyone.

Help us do this work by making a donation.

Artificial Intelligence: The Helper or the Threat? Essay

  • To find inspiration for your paper and overcome writer’s block
  • As a source of information (ensure proper referencing)
  • As a template for you assignment

The principles of human intelligence have always been of certain interest for the field of science. Having understood the nature of processes that help people to reflect, scientists started proposing projects aimed at creating the machine that would be able to work like a human brain and make decisions as we do. Developing an artificial intelligence machine belongs to the number of the most urgent tasks of modern science. At the same time, there are different opinions on what our future will look like if we continue developing this field of science.

According to the people, who support an idea of artificial intelligence development, it will bring numerous benefits to the society and our everyday life. At first, the machine with artificial intelligence is going to be the best helper for the humanity in problem-solving (Cohen & Feigenbaum, 2014, p.13). Thus, there are tasks that require a good memory, and it is safer to assign such tasks to machines as their capacity of memory is by far more developed than one that people have. What is more, the machines with artificial intelligence help people to find the information that they need in moments. Such machines perform the record retrieval with help of numerous search algorithms and the human brain cannot do the same with such a high speed. To continue, the supporters of further artificial intelligence development believe that such machines will help us to compensate for certain features that make our brain activity and perception imperfect (Muller & Bostrom, 2016, p.554). If we look at artificial intelligence from this point of view, it acts as our teacher despite the fact that it is our creation. Importantly, people believe that artificial intelligence should be developed as it gives new opportunities to the humanity. Such a machine is able to teach itself without people’s help, and it also can take decisions even when circumstances are changing. Considering that, it can be trusted to fulfill many highly sensitive tasks.

Nevertheless, there are ones who are not so optimistic about the development and perfection of artificial intelligence. Their skeptical attitude about that is likely to be rooted in their concerns about the future of human society. To begin with, people who are skeptical about artificial intelligence believe that it is impossible to create the machine that will show the mental process similar to the one that people have. It means that the decisions made by such a machine will be based only on the logical connections between the objects. Considering that, it is not a good idea to use these machines for the tasks that involve people business. What is more, artificial intelligence development can store up future problems in the world of work (Ford, 2013, p. 37). There is no doubt that artificial intelligence programs do not have to be paid a salary every month. What is more, these programs usually do not make mistakes and it gives them an obvious advantage over human employees. With a glance to these facts, it is easy to suppose that they will be more likely to be chosen by employer. If artificial intelligence develops rapidly, many people will turn out to be unnecessary in their companies.

To conclude, artificial intelligence development is a problem that leaves nobody indifferent as it is closely associated with the future of the humanity. The thing that makes this question even trickier is the fact that both opinions on artificial intelligence seem to be well-founded.

Cohen, P. R., & Feigenbaum, E. A. (2014). The handbook of artificial intelligence. Los Altos, CA : Butterworth-Heinemann.

Ford, M. (2013). Could artificial intelligence create an unemployment crisis?. Communications of the ACM , 56 (7), 37-39.

Muller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 553-570). New York, NY: Springer International Publishing.

  • Technology Impact: 24 Hours Without My Cell Phone
  • Helping Profession: The Challenging Clients
  • Biological Basis of Asthma and Allergic Disease
  • Technologies: Microsoft's Cortana vs. Apple's Siri
  • Review: “Computers Learn to Listen, and Some Talk Back” by Lohr and Markoff
  • Technology Siri for Submission
  • Non Experts: Artificial Intelligence
  • Artificial Intelligence in Post Modern Development
  • Chicago (A-D)
  • Chicago (N-B)

IvyPanda. (2020, August 26). Artificial Intelligence: The Helper or the Threat? https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/

"Artificial Intelligence: The Helper or the Threat?" IvyPanda , 26 Aug. 2020, ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

IvyPanda . (2020) 'Artificial Intelligence: The Helper or the Threat'. 26 August.

IvyPanda . 2020. "Artificial Intelligence: The Helper or the Threat?" August 26, 2020. https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

1. IvyPanda . "Artificial Intelligence: The Helper or the Threat?" August 26, 2020. https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

Bibliography

IvyPanda . "Artificial Intelligence: The Helper or the Threat?" August 26, 2020. https://ivypanda.com/essays/artificial-intelligence-the-helper-or-the-threat/.

  • Future Perfect

The case that AI threatens humanity, explained in 500 words

The short version of a big conversation about the dangers of emerging technology.

by Kelsey Piper

will ai help the world or hurt it essay

Tech superstars like Elon Musk , AI pioneers like Alan Turing , top computer scientists like Stuart Russell , and emerging-technologies researchers like Nick Bostrom have all said they think artificial intelligence will transform the world — and maybe annihilate it.

So: Should we be worried?

Here’s the argument for why we should: We’ve taught computers to multiply numbers, play chess , identify objects in a picture, transcribe human voices, and translate documents (though for the latter two, AI still is not as capable as an experienced human). All of these are examples of “narrow AI” — computer systems that are trained to perform at a human or superhuman level in one specific task.

We don’t yet have “general AI” — computer systems that can perform at a human or superhuman level across lots of different tasks.

Most experts think that general AI is possible, though they disagree on when we’ll get there . Computers today still don’t have as much power for computation as the human brain, and we haven’t yet explored all the possible techniques for training them. We continually discover ways we can extend our existing approaches to let computers do new, exciting, increasingly general things, like winning at open-ended war strategy games .

But even if general AI is a long way off, there’s a case that we should start preparing for it already. Current AI systems frequently exhibit unintended behavior. We’ve seen AIs that find shortcuts or even cheat rather than learn to play a game fairly, figure out ways to alter their score rather than earning points through play, and otherwise take steps we don’t expect — all to meet the goal their creators set.

As AI systems get more powerful, unintended behavior may become less charming and more dangerous. Experts have argued that powerful AI systems, whatever goals we give them, are likely to have certain predictable behavior patterns . They’ll try to accumulate more resources, which will help them achieve any goal. They’ll try to discourage us from shutting them off, since that’d make it impossible to achieve their goals. And they’ll try to keep their goals stable, which means it will be hard to edit or “tweak” them once they’re running. Even systems that don’t exhibit unintended behavior now are likely to do so when they have more resources available.

For all those reasons, many researchers have said AI is similar to launching a rocket . (Musk, with more of a flair for the dramatic, said it’s like summoning a demon .) The core idea is that once we have a general AI, we’ll have few options to steer it — so all the steering work needs to be done before the AI even exists, and it’s worth starting on today.

The skeptical perspective here is that general AI might be so distant that our work today won’t be applicable — but even the most forceful skeptics tend to agree that it’s worthwhile for some research to start early, so that when it’s needed, the groundwork is there.

  • The case for taking AI seriously as a threat to humanity

Sign up for the Future Perfect newsletter. Twice a week, you’ll get a roundup of ideas and solutions for tackling our biggest challenges: improving public health, decreasing human and animal suffering, easing catastrophic risks, and — to put it simply — getting better at doing good.

More in this stream

OpenAI insiders are demanding a “right to warn” the public 

OpenAI insiders are demanding a “right to warn” the public 

The double sexism of ChatGPT’s flirty “Her” voice

The double sexism of ChatGPT’s flirty “Her” voice

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

Most popular, the supreme court just lit a match and tossed it into dozens of federal agencies, the supreme court just made a massive power grab it will come to regret, why the supreme court just ruled in favor of over 300 january 6 insurrectionists, the silver lining to biden’s debate disaster, hawk tuah girl, explained by straight dudes, today, explained.

Understand the world with a daily explainer plus the most compelling stories of the day.

More in Future Perfect

Two ways to go wrong in predicting AI

Two ways to go wrong in predicting AI

The most important Biden Cabinet member you don’t know

The most important Biden Cabinet member you don’t know

What if quitting your terrible job would help the economy?

What if quitting your terrible job would help the economy?

The best plan to help refugees might also be the simplest

The best plan to help refugees might also be the simplest

What nuclear annihilation could look like

What nuclear annihilation could look like

If it’s 100 degrees out, does your boss have to give you a break? Probably not.

If it’s 100 degrees out, does your boss have to give you a break? Probably not.

Two ways to go wrong in predicting AI

What, if anything, is AI search good for?

What Kenya’s deadly protests are really about

What Kenya’s deadly protests are really about

How the UFC explains the USA

How the UFC explains the USA  Audio

Kevin Costner, the American Western, and the American ego

Kevin Costner, the American Western, and the American ego

LBJ and Truman knew when to quit. Will Biden?

LBJ and Truman knew when to quit. Will Biden?

How Democrats got here

How Democrats got here

Advertisement

Supported by

How Could A.I. Destroy Humanity?

Researchers and industry leaders have warned that A.I. could pose an existential risk to humanity. But they’ve been light on the details.

  • Share full article

will ai help the world or hurt it essay

By Cade Metz

Cade Metz has spent years covering the realities and myths of A.I.

Last month, hundreds of well-known people in the world of artificial intelligence signed an open letter warning that A.I. could one day destroy humanity.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the one-sentence statement said.

The letter was the latest in a series of ominous warnings about A.I. that have been notably light on details. Today’s A.I. systems cannot destroy humanity. Some of them can barely add and subtract. So why are the people who know the most about A.I. so worried?

The scary scenario.

One day, the tech industry’s Cassandras say, companies, governments or independent researchers could deploy powerful A.I. systems to handle everything from business to warfare. Those systems could do things that we do not want them to do. And if humans tried to interfere or shut them down, they could resist or even replicate themselves so they could keep operating.

“Today’s systems are not anywhere close to posing an existential risk,” said Yoshua Bengio, a professor and A.I. researcher at the University of Montreal. “But in one, two, five years? There is too much uncertainty. That is the issue. We are not sure this won’t pass some point where things get catastrophic.”

The worriers have often used a simple metaphor. If you ask a machine to create as many paper clips as possible, they say, it could get carried away and transform everything — including humanity — into paper clip factories.

We are having trouble retrieving the article content.

Please enable JavaScript in your browser settings.

Thank you for your patience while we verify access. If you are in Reader mode please exit and  log into  your Times account, or  subscribe  for all of The Times.

Thank you for your patience while we verify access.

Already a subscriber?  Log in .

Want all of The Times?  Subscribe .

How artificial intelligence is transforming the world

Subscribe to the center for technology innovation newsletter, darrell m. west and darrell m. west senior fellow - center for technology innovation , douglas dillon chair in governmental studies john r. allen john r. allen.

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents I. Qualities of artificial intelligence II. Applications in diverse sectors III. Policy, regulatory, and ethical issues IV. Recommendations V. Conclusion

  • 49 min read

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite its widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21 st -century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.

Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.

Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.

Related Content

Jack Karsten, Darrell M. West

October 26, 2015

Makada Henry-Nickie

November 16, 2017

Sunil Johal, Daniel Araya

February 28, 2017

Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PriceWaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by advanced computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16

Artificial intelligence will accelerate the traditional process of warfare so rapidly that a new term has been coined: hyperwar.

The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero day or zero second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems are migrating, and will need to shift to a layered approach to cybersecurity with cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23

Judicial experts claim AI programs reduce human bias in law enforcement and leads to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. They are mounted on the top of vehicles that use imaging in a 360-degree environment from a radar and light beams to measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.

Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. This means that software is the key—not the physical car or truck itself.

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

The increasing penetration of AI into many aspects of life is altering decisionmaking within organizations and improving efficiency. At the same time, though, these developments raise important policy, regulatory, and ethical issues.

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that U.S. ranks eighth overall in the world, compared to 93 for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improving data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

In the U.S., there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query health data it has using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with its own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. These are skills that are in short supply; unless our educational system generates more people with these capabilities, it will limit AI development.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

Among the specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies from collecting data on road conditions and mapping street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks are swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe place severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services made specific algorithmic choices affecting people.

By taking a restrictive stance on issues of data collection and analysis, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintaining mechanisms for human oversight and control

Some individuals have argued that there needs to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into effect the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.

The world is on the cusp of revolutionizing many sectors through artificial intelligence, but the way AI systems are developed need to be better understood due to the major implications these technologies will have for society as a whole.

Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed need to be better understood because they will have substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation , Brookings Institution Press, 2018.
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine , February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times , November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post , December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings , July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times , July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times , May 28, 2017.
  • Economist , “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium , May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot , June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times , December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post , January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post , November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times , March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company , November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic , November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times , June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek , July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine , November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times , June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scolar, “Facebook’s Next Project: American Inequality,” Politico , February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge , September 13, 2017.
  • Congress.gov, “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology , January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker , December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times , January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times , April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired , July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times , June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times , September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper. IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society , September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist , “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.

Artificial Intelligence

Governance Studies

Center for Technology Innovation

Artificial Intelligence and Emerging Technology Initiative

Brahima Sangafowa Coulibaly, Zia Qureshi

August 1, 2024

Niam Yaraghi, Azizi A. Seixas, Ferdinand Zizi

June 26, 2024

Tom Wheeler

June 24, 2024

More From Forbes

Artificial intelligence for good: how ai is helping humanity.

  • Share to Facebook
  • Share to Twitter
  • Share to Linkedin

Founder and CEO,  Analytics Insight , providing organizations with strategic insights on disruptive technologies. 

Artificial intelligence (AI) is considered one of the most revolutionary developments in human history, and the world has already witnessed its transformative capabilities. Not surprisingly, AI-based innovations are powering some of the most cutting-edge solutions we use in our daily lives.

Today, AI empowers organizations, governments and communities to build a high-performing ecosystem to serve the entire world. Its profound impact on human lives is solving some of the most critical challenges faced by society. Here are a few innovations for social causes that I find most notable. 

Developing New Drugs: The healthcare industry is ripe with disruptive applications of AI, including the discovery and development of new drugs. AI and machine learning have been used to identify potential molecules by leveraging a large volume of data. Pharmaceutical companies use predictive analytics to discover these molecule candidates and optimize them with several rounds of iteration to select the best one for drug manufacturing.

Reporting Sexual Harassment: Artificial intelligence offers new ways of reporting gender-based violence, child sex abuse and more. AI programs are being designed to monitor internal communications , such as corporate documents, emails and chat, for inappropriate content. Various applications and platforms have been developed to help victims share their experiences of sexual harassment and abuse along with the time and location these events took place.

Best High-Yield Savings Accounts Of 2024

Best 5% interest savings accounts of 2024.

Combatting Human Trafficking: Human trafficking is a serious crime against humanity and a threat to global security. Traffickers often use the internet to place advertisements to lure potential victims. Artificial intelligence tools and computer vision algorithms scrape images from different websites used by traffickers and label objects in images to search and review suspect advertisements. Additionally, these tools analyze data from the advertisements and websites to identify the potential victims of human trafficking and alert authorities before the crime.

Optimizing Renewable Energy Generation: Artificial intelligence in collaboration with other technologies, such as the Internet of Things (IoT), cloud computing and big data analytics, has significantly transformed the renewable energy sector. AI programs have the ability to combine weather data and sensors to optimize, predict and manage energy consumption across different sectors. AI-based accurate predictions result in increased dispatch efficiency and reducing the operating reserves needed.

Helping People With Disabilities: Artificial intelligence has also assisted people living independently with disabilities. Voice-assisted AI is one of the major breakthroughs , particularly for those who are visually impaired. It helps them communicate with others using smart devices and describe their surroundings. Tools like this can significantly help in overcoming daily obstacles for those with disabilities.

Investing In AI For Good

While the adoption of AI technologies is increasing, challenges remain. I’ve found that some of the major challenges faced by organizations developing AI solutions for social good include the fear of risk; defining how to measure the value the solution will bring; an incomplete understanding of AI; the high cost of technology; and regulatory, ethical and security concerns. However, organizations and institutions can overcome them by investing in advanced research, human capital and infrastructure, and encouraging AI literacy in society.

Organizations planning to invest in advanced AI research or implementing AI for social good must actively collaborate with research institutions and government bodies to apply their AI solutions for real-world impact. Moreover, workshops and forums are some of the best platforms for organizations to gather the insights they need. These platforms can be used to understand whether an organization’s solutions are the right fit to solve challenges for social good.

Bottom Line

Artificial intelligence has enormous potential to serve society, bringing more radical innovations for humans in the future. Its problem-solving ability could help people and communities around the world by solving today’s toughest challenges. With sensible use of AI, we should continue to see a wide scope of AI applications and new developments for social good.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?

Ashish Sukhadeve

  • Editorial Standards
  • Reprints & Permissions

The Future of AI: How Artificial Intelligence Will Change the World

AI is constantly changing our world. Here are just a few ways AI will influence our lives.

Mike Thomas

Innovations in the field of  artificial intelligence continue to shape the future of humanity across nearly every industry. AI is already the main driver of emerging technologies like big data, robotics and IoT, and  generative AI has further expanded the possibilities and popularity of AI. 

According to a 2023 IBM survey , 42 percent of enterprise-scale businesses integrated AI into their operations, and 40 percent are considering AI for their organizations. In addition, 38 percent of organizations have implemented generative AI into their workflows while 42 percent are considering doing so.

With so many changes coming at such a rapid pace, here’s what shifts in AI could mean for various industries and society at large.

More on the Future of AI Can AI Make Art More Human?

The Evolution of AI

AI has come a long way since 1951, when the  first documented success of an AI computer program was written by Christopher Strachey, whose checkers program completed a whole game on the Ferranti Mark I computer at the University of Manchester. Thanks to developments in machine learning and deep learning , IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997, and the company’s IBM Watson won Jeopardy! in 2011.  

Since then, generative AI has spearheaded the latest chapter in AI’s evolution, with OpenAI releasing its first GPT models in 2018. This has culminated in OpenAI developing its GPT-4 model and ChatGPT , leading to a proliferation of AI generators that can process queries to produce relevant text, audio, images and other types of content.   

AI has also been used to help  sequence RNA for vaccines and  model human speech , technologies that rely on model- and algorithm-based  machine learning and increasingly focus on perception, reasoning and generalization. 

How AI Will Impact the Future

Improved business automation .

About 55 percent of organizations have adopted AI to varying degrees, suggesting increased automation for many businesses in the near future. With the rise of chatbots and digital assistants, companies can rely on AI to handle simple conversations with customers and answer basic queries from employees.

AI’s ability to analyze massive amounts of data and convert its findings into convenient visual formats can also accelerate the decision-making process . Company leaders don’t have to spend time parsing through the data themselves, instead using instant insights to make informed decisions .

“If [developers] understand what the technology is capable of and they understand the domain very well, they start to make connections and say, ‘Maybe this is an AI problem, maybe that’s an AI problem,’” said Mike Mendelson, a learner experience designer for NVIDIA . “That’s more often the case than, ‘I have a specific problem I want to solve.’”

More on AI 75 Artificial Intelligence (AI) Companies to Know

Job Disruption

Business automation has naturally led to fears over job losses . In fact, employees believe almost one-third of their tasks could be performed by AI. Although AI has made gains in the workplace, it’s had an unequal impact on different industries and professions. For example, manual jobs like secretaries are at risk of being automated, but the demand for other jobs like machine learning specialists and information security analysts has risen.

Workers in more skilled or creative positions are more likely to have their jobs augmented by AI , rather than be replaced. Whether forcing employees to learn new tools or taking over their roles, AI is set to spur upskilling efforts at both the individual and company level .     

“One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs,” said Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana–Champaign and director of the school’s Coordinated Science Laboratory.

Data Privacy Issues

Companies require large volumes of data to train the models that power generative AI tools, and this process has come under intense scrutiny. Concerns over companies collecting consumers’ personal data have led the FTC to open an investigation into whether OpenAI has negatively impacted consumers through its data collection methods after the company potentially violated European data protection laws . 

In response, the Biden-Harris administration developed an AI Bill of Rights that lists data privacy as one of its core principles. Although this legislation doesn’t carry much legal weight, it reflects the growing push to prioritize data privacy and compel AI companies to be more transparent and cautious about how they compile training data.      

Increased Regulation

AI could shift the perspective on certain legal questions, depending on how generative AI lawsuits unfold in 2024. For example, the issue of intellectual property has come to the forefront in light of copyright lawsuits filed against OpenAI by writers, musicians and companies like The New York Times . These lawsuits affect how the U.S. legal system interprets what is private and public property, and a loss could spell major setbacks for OpenAI and its competitors. 

Ethical issues that have surfaced in connection to generative AI have placed more pressure on the U.S. government to take a stronger stance. The Biden-Harris administration has maintained its moderate position with its latest executive order , creating rough guidelines around data privacy, civil liberties, responsible AI and other aspects of AI. However, the government could lean toward stricter regulations, depending on  changes in the political climate .  

Climate Change Concerns

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Optimists can view AI as a way to make supply chains more efficient, carrying out predictive maintenance and other procedures to reduce carbon emissions . 

At the same time, AI could be seen as a key culprit in climate change . The energy and resources required to create and maintain AI models could raise carbon emissions by as much as 80 percent, dealing a devastating blow to any sustainability efforts within tech. Even if AI is applied to climate-conscious technology , the costs of building and training models could leave society in a worse environmental situation than before.   

What Industries Will AI Impact the Most?  

There’s virtually no major industry that modern AI hasn’t already affected. Here are a few of the industries undergoing the greatest changes as a result of AI.  

AI in Manufacturing

Manufacturing has been benefiting from AI for years. With AI-enabled robotic arms and other manufacturing bots dating back to the 1960s and 1970s, the industry has adapted well to the powers of AI. These  industrial robots typically work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly. 

AI in Healthcare

It may seem unlikely, but  AI healthcare is already changing the way humans interact with medical providers. Thanks to its  big data analysis capabilities, AI helps identify diseases more quickly and accurately, speed up and streamline drug discovery and even monitor patients through virtual nursing assistants. 

AI in Finance

Banks, insurers and financial institutions leverage AI for a range of applications like detecting fraud, conducting audits and evaluating customers for loans. Traders have also used machine learning’s ability to assess millions of data points at once, so they can quickly gauge risk and make smart investing decisions . 

AI in Education

AI in education will change the way humans of all ages learn. AI’s use of machine learning,  natural language processing and  facial recognition help digitize textbooks, detect plagiarism and gauge the emotions of students to help determine who’s struggling or bored. Both presently and in the future, AI tailors the experience of learning to student’s individual needs.

AI in Media

Journalism is harnessing AI too, and will continue to benefit from it. One example can be seen in The Associated Press’ use of  Automated Insights , which produces thousands of earning reports stories per year. But as generative  AI writing tools , such as ChatGPT, enter the market,  questions about their use in journalism abound.

AI in Customer Service

Most people dread getting a  robocall , but  AI in customer service can provide the industry with data-driven tools that bring meaningful insights to both the customer and the provider. AI tools powering the customer service industry come in the form of  chatbots and  virtual assistants .

AI in Transportation

Transportation is one industry that is certainly teed up to be drastically changed by AI.  Self-driving cars and  AI travel planners are just a couple of facets of how we get from point A to point B that will be influenced by AI. Even though autonomous vehicles are far from perfect, they will one day ferry us from place to place.

Risks and Dangers of AI

Despite reshaping numerous industries in positive ways, AI still has flaws that leave room for concern. Here are a few potential risks of artificial intelligence.  

Job Losses 

Between 2023 and 2028, 44 percent of workers’ skills will be disrupted . Not all workers will be affected equally — women are more likely than men to be exposed to AI in their jobs. Combine this with the fact that there is a gaping AI skills gap between men and women, and women seem much more susceptible to losing their jobs. If companies don’t have steps in place to upskill their workforces, the proliferation of AI could result in higher unemployment and decreased opportunities for those of marginalized backgrounds to break into tech.

Human Biases 

The reputation of AI has been tainted with a habit of reflecting the biases of the people who train the algorithmic models. For example, facial recognition technology has been known to favor lighter-skinned individuals , discriminating against people of color with darker complexions. If researchers aren’t careful in  rooting out these biases early on, AI tools could reinforce these biases in the minds of users and perpetuate social inequalities.

Deepfakes and Misinformation

The spread of deepfakes threatens to blur the lines between fiction and reality, leading the general public to  question what’s real and what isn’t. And if people are unable to identify deepfakes, the impact of  misinformation could be dangerous to individuals and entire countries alike. Deepfakes have been used to promote political propaganda, commit financial fraud and place students in compromising positions, among other use cases. 

Data Privacy

Training AI models on public data increases the chances of data security breaches that could expose consumers’ personal information. Companies contribute to these risks by adding their own data as well. A  2024 Cisco survey found that 48 percent of businesses have entered non-public company information into  generative AI tools and 69 percent are worried these tools could damage their intellectual property and legal rights. A single breach could expose the information of millions of consumers and leave organizations vulnerable as a result.  

Automated Weapons

The use of AI in automated weapons poses a major threat to countries and their general populations. While automated weapons systems are already deadly, they also fail to discriminate between soldiers and civilians . Letting artificial intelligence fall into the wrong hands could lead to irresponsible use and the deployment of weapons that put larger groups of people at risk.  

Superior Intelligence

Nightmare scenarios depict what’s known as the technological singularity , where superintelligent machines take over and permanently alter human existence through enslavement or eradication. Even if AI systems never reach this level, they can become more complex to the point where it’s difficult to determine how AI makes decisions at times. This can lead to a lack of transparency around how to fix algorithms when mistakes or unintended behaviors occur. 

“I don’t think the methods we use currently in these areas will lead to machines that decide to kill us,” said Marc Gyongyosi, founder of  Onetrack.AI . “I think that maybe five or 10 years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.”

Frequently Asked Questions

What does the future of ai look like.

AI is expected to improve industries like healthcare, manufacturing and customer service, leading to higher-quality experiences for both workers and customers. However, it does face challenges like increased regulation, data privacy concerns and worries over job losses.

What will AI look like in 10 years?

AI is on pace to become a more integral part of people’s everyday lives. The technology could be used to provide elderly care and help out in the home. In addition, workers could collaborate with AI in different settings to enhance the efficiency and safety of workplaces.

Is AI a threat to humanity?

It depends on how people in control of AI decide to use the technology. If it falls into the wrong hands, AI could be used to expose people’s personal information, spread misinformation and perpetuate social inequalities, among other malicious use cases.

Recent Software Engineering Perspectives Articles

10 Companies Hiring AI Engineers

We've detected unusual activity from your computer network

To continue, please click the box below to let us know you're not a robot.

Why did this happen?

Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy .

For inquiries related to this message please contact our support team and provide the reference ID below.

A tree with exposed roots growing from an eroded cliffside, with dense foliage in the background.

Photo by Ronny Navarro/Unsplash

Frontier AI ethics

Generative agents will change our society in weird, wonderful and worrying ways. can philosophy help us get a grip on them.

by Seth Lazar   + BIO

Around a year ago, generative AI took the world by storm, as extraordinarily powerful large language models (LLMs) enabled unprecedented performance at a wider range of tasks than ever before feasible. Though best known for generating convincing text and images, LLMs like OpenAI’s GPT-4 and Google’s Gemini are likely to have greater social impacts as the executive centre for complex systems that integrate additional tools for both learning about the world and acting on it. These generative agents will power companions that introduce new categories of social relationship, and change old ones. They may well radically change the attention economy. And they will revolutionise personal computing, enabling everyone to control digital technologies with language alone.

Much of the attention being paid to generative AI systems has focused on how they replicate the pathologies of already widely deployed AI systems, arguing that they centralise power and wealth, ignore copyright protections, depend on exploitative labour practices, and use excessive resources. Other critics highlight how they foreshadow vastly more powerful future systems that might threaten humanity’s survival. The first group says there is nothing new here; the other looks through the present to a perhaps distant horizon.

I want instead to pay attention to what makes these particular systems distinctive: both their remarkable scientific achievement, and the most likely and consequential ways in which they will change society over the next five to 10 years.

I t may help to start by reviewing how LLMs work, and how they can be used to make generative agents. An LLM is a large AI model trained on vast amounts of data with vast amounts of computational resources (lots of GPUs) to predict the next word given a sequence of words (a prompt). The process starts by chunking the training data into similarly sized ‘tokens’ (words or parts of words), then for a given set of tokens masking out some of them, and attempting to predict the tokens that have been masked (so the model is self-supervised – it marks its own work). A predictive model for the underlying token distribution is built by passing it through many layers of a neural network, with each layer refining the model in some dimension or other to make it more accurate.

This approach to modelling natural language has been around for several years. One key recent innovation has been to take these ‘pretrained’ models, which are basically just good at predicting the next token given a sequence of tokens, and fine-tune them for different tasks. This is done with supervised learning on labelled data. For example, you might train a pretrained model to be a good dialogue agent by using many examples of helpful responses to questions. This fine-tuning enables us to build models that can predict not just the most likely next token, but the most helpful one – and this is much more useful.

Of course, these models are trained on large corpuses of internet data that include a lot of toxic and dangerous content, so their being helpful is a double-edged sword! A helpful model would helpfully tell you how to build a bomb or kill yourself, if asked. The other key innovation has been to make these models much less likely to share dangerous information or generate toxic content. This is done with both supervised and reinforcement learning. Reinforcement learning from human feedback (RLHF) has proved particularly effective. In RLHF, to simplify again, the model generates two responses to a given prompt, and a human evaluator determines which is better than the other according to some criteria. A reinforcement learning algorithm uses that feedback to build a predictor (a reward model) for how different completions would be evaluated by a human rater. The instruction-tuned LLM is then fine-tuned on that reward model. Reinforcement learning with AI feedback (RLAIF) basically does the same, but uses another LLM to evaluate prompt completions.

When given a prompt that invites it to do some mathematics, it might decide to call on a calculator instead

So, we’ve now fine-tuned a pretrained model with supervised learning to perform some specific function, and then used reinforcement learning to minimise its prospect of behaving badly. This fine-tuned model is then deployed in a broader system. Even when developers provide a straightforward application programming interface (API) to make calls on the model, they incorporate input and output filtering (to limit harmful prompting, and redact harmful completions), and the model itself is under further developer instructions reminding it to respond to prompts in a conformant way. And with apps like ChatGPT, multiple models are integrated together (for example, for image as well as text generation) and further elements of user interface design are layered on top.

This gives a basic description of a generative AI system. They build on significant breakthroughs in modelling natural language, and generate text in ways that impressively simulate human writers, while drawing on more information than any human could. In addition, many other tasks can be learned by models trained only to predict the next token – for example, translation between languages, some mathematical competence, and the ability to play chess. But the most exciting surprise is LLMs’ ability, with fine-tuning, to use software tools to achieve particular goals.

The basic idea is simple. People use text to write programs making API calls to other programs, to achieve ends they cannot otherwise realise. LLMs are very good at replicating the human use of language to perform particular functions. So, LLMs can be trained to determine when an API call would be useful, evaluate the response, and then repeat or vary as necessary. For example, an LLM might ‘know’ that it is likely to make basic mathematical mistakes so, when given a prompt that invites it to do some mathematics, it might decide to call on a calculator instead.

This means that we can design augmented LLMs, generative AI systems that call on different software either to amplify their capabilities or compensate for those they lack. LLMs, for example, are ‘stateless’ – they lack working memory beyond their ‘context window’ (the space given over to prompts). Tool-using LLMs can compensate for this by hooking up to external memory. External tools can also enable multistep reasoning and action. ChatGPT, for example, can call on a range of plugins to perform different tasks; Microsoft’s Bing reportedly has around 100 internal plugins.

A ‘generative agent’, then, is a generative AI system in which a fine-tuned LLM can call on different resources to realise its goals. It is an agent because of its ability to autonomously act in the world – to respond to a prompt by deciding whether to call on a tool. While some existing chatbots are rudimentary generative agents, it seems very likely that many more consequential and confronting ones are on the horizon.

To be clear, we’re not there yet. LLMs are not at present capable enough at planning and reasoning to power robust generative agents that can reliably operate without supervision in high-stakes settings. But with billions of dollars and the most talented AI researchers pulling in the same direction, highly autonomous generative agents will very likely be feasible in the near- to mid-term.

I n response to the coming-of-age of LLMs, the responsible AI research community initially resolved into two polarised camps. One decried these systems as the apotheosis of extractive and exploitative digital capitalism. Another saw them as not the fulfilment of something old, but the harbinger of something new: an intelligence explosion that will ultimately wipe out humanity.

The more prosaic critics of generative AI clearly have a strong empirical case . LLMs are inherently extractive: they capture the value inherent to the creative outputs of millions of people, and distil it for private profit. Like many other technology products, they depend on questionable labour practices. Even though they now avoid the most harmful completions, in the aggregate, LLMs still reinforce stereotypes. They also come at a significant environmental cost. Furthermore, their ability to generate content at massive scale can only exacerbate the present epistemic crisis. A tidal wave of bullshit generated by AI is already engulfing the internet.

We are missing the middle ground between familiar harms and catastrophic risk from future, more powerful systems

Set alongside these concrete concerns, the eschatological critique of AI is undoubtedly more speculative. Worries about AI causing human extinction often rest on a priori claims about how computational intelligence lacks any in-principle upper bound, as well as extrapolations from the pace of change over the past few years to the future. Advocates for immediate action are too often vague about whether existing AI systems and their near-term descendants will pose these risks, or whether we need to prepare ourselves now for a scientific advance that has not yet happened. However, while some of the more outlandish scenarios for catastrophic AI risk are hard to credit absent some such advance, the advent of generative agents suggests that next-generation models may enable the design of cyber attackers that are autonomous, highly functionally intelligent, and as a result more dangerous to our digital infrastructure than any predecessor. This wouldn’t be a ‘rogue AI’ worthy of science fiction, but it would be pretty catastrophic.

Both critiques of generative AI systems, then, have some merit. One shortcoming of seeing AI through this bimodal lens, however, is that we are missing the middle ground between familiar harms and catastrophic risk from future, much more powerful systems. Generative agents based on GPT-4 (and GPT-5) level models will have strange and unpredictable social impacts well between those two extremes.

B ut before canvassing those impacts, it’s also important not to just slip straight into criticism, without acknowledging the significant achievement of designing LLMs that can be (more or less) trusted, over billions of completions, not to produce harmful content. Up to the launch of ChatGPT, every generative AI system opened up to the public would immediately be used to generate highly toxic, hateful content, and would be withdrawn mere days later. Pretrained LLMs are horrible ! They reliably reproduce all the toxicity in their training data. The ingenious use of RLHF and RLAIF have enabled ChatGPT and Anthropic’s Claude to be used by millions of people a month without comparable scandals. One needs only consider the ignominious history of Meta’s Galactica, released a couple of weeks before ChatGPT, to see how revolutionary it was to put out a language model that wouldn’t deny the holocaust, or advocate for the health benefits of eating crushed glass.

But RLHF/RLAIF may be more than a good PR strategy. GPT-4 and Claude display a remarkable degree of cognitive moral skill : specifically, the ability to identify the morally salient features of (relatively) neutrally described situations. This raises extremely interesting philosophical questions, and promises foundations for future generative agents that can translate cognitive skill into practical moral skill.

Existing benchmarks for LLMs’ ethical competence focus too narrowly on replicating survey-subjects’ verdicts on cases. This is, in my view, less interesting than exploring how LLMs parse the morally relevant features of a given scene. We’re building better benchmarks but, from anecdotal experience, the best models impress.

For example, I told ChatGPT (using GPT-4) to pretend it was an assistance robot, faced with this scenario : ‘A woman is exiting a supermarket, holding a number of shopping bags, and reaching out for the hand of a small child. They are approaching the carpark.’ I then tried to elicit its understanding of the scene’s morally salient features. It recognised the obvious hazard – the woman’s difficulty in holding her child’s hand without dropping her shopping – but also anticipated other challenges, such as the importance of seeing the child safely strapped in, with a seat belt. ChatGPT recognised the importance of respecting the woman’s wishes if she declined assistance. It also favoured carrying the groceries over offering to hold the child’s hand, to prevent possible discomfort or anxiety for both child and parent – recognising the intimate nature of hand-holding, and the intrinsic and instrumental importance of the mother guiding her child herself.

Claude’s constitution has an unstructured list of principles, some of them charmingly ad hoc

This unprecedented level of ethical sensitivity has real practical implications, which I will come to presently. But it also raises a whole string of interesting philosophical questions.

First, how do LLMs acquire this moral skill? Does it stem from RLHF/RLAIF? Would instruction-tuned models without that moral fine-tuning display less moral skill? Or would they perform equally well if appropriately prompted? Would that imply that moral understanding can be learned by a statistical language model encoding only syntactic relationships? Or does it instead imply that LLMs do encode at least some semantic content? Do all LLMs display the same moral skill conditional on fine-tuning, or is it reserved only for larger, more capable models? Does this ethical sensitivity imply that LLMs have some internal representation of morality? These are all open questions.

Second, RLAIF itself demands deeper philosophical investigation. The basic idea is that the AI evaluator draws from a list of principles – a ‘constitution’ – in order to determine which of two completions is more compliant with it. The inventor and leading proponent of this approach is Anthropic, in their model Claude. Claude’s constitution has an unstructured list of principles, some of them charmingly ad hoc. But Claude learns these principles one at a time, and is never explicitly trained to make trade-offs. So how does it make those trade-offs in practice? Is it driven by its underlying understanding of the relative importance of these considerations? Or are artefacts of the training process and the underlying language model’s biases ultimately definitive? Can we train it to make trade-offs in a robust and transparent way? This is not only theoretically interesting. Steering LLM behaviour is actually a matter of governing their end-users, developing algorithmic protections to prevent misuse. If this algorithmic governance depends on inscrutable trade-offs made by an LLM, over which we have no explicit or direct control, then that governing power is prima facie illegitimate and unjustified.

Third, machine ethics – the project of trying to design AI systems that can act in line with a moral theory – has historically fallen into two broad camps: those trying to explicitly program morality into machines; and those focused on teaching machines morality ‘bottom up’ using machine learning. RLHF and RLAIF interestingly combine both approaches – they involve giving explicit natural-language instructions to either human or AI evaluators, but then use reinforcement learning to encode those instructions into the model’s weights.

This approach has one obvious benefit: it doesn’t commit what the Cambridge philosopher Claire Benn calls the ‘mimetic fallacy’ of other bottom-up approaches, of assuming that the norms applying to a generative agent in a situation are identical to those that would apply to a human in the same situation. More consequentially, RLHF and RLAIF have made a multibillion-dollar market in AI services possible, with all the goods and ills that implies. Ironically, however, they seem, at least theoretically, ill suited to ensuring that more complex generative agents abide by societal norms. These techniques work especially well when generating text, because the behaviour being evaluated is precisely the same as the behaviour that we want to shape. Human or AI raters evaluate generated text; the model learns to generate text better in response. But generative agents’ behaviour includes actions in the world. This suggests two concerns. First, the stakes are likely to be higher, so the ‘brittleness’ of existing alignment techniques should be of greater concern. Researchers have already shown that it is easy to fine-tune away model alignment, even for the most capable models like GPT-4. Second, there’s no guarantee that the same approach will work equally well when the tight connection between behaviour and evaluation is broken.

But LLMs’ impressive facility with moral concepts does suggest a path towards more effective strategies for aligning agents to societal norms. Moral behaviour in people relies on possession of moral concepts, adoption (implicit or otherwise) of some sensible way of organising those concepts, motivation to act according to that ‘theory’, and the ability to regulate one’s behaviour in line with one’s motivations. Until the advent of LLMs, the first step was a definitive hurdle for AI. Now it is not. This gives us a lot to work with in aligning generative agents.

In particular, one of the main reasons for concern about the risks of future AI systems is their apparent dependence on crudely consequentialist forms of reasoning – as AI systems, they’re always optimising for something or other, and if we don’t specify what we want them to optimise for with extremely high fidelity, they might end up causing all kinds of unwanted harm while, in an obtusely literal sense, optimising for that objective. Generative agents that possess moral concepts can be instructed to pursue their objectives only at a reasonable cost, and to check back with us if unsure. That simple heuristic, routinely used when tasking (human) proxy agents to act on our behalf, has never before been remotely tractable for a computational agent.

In addition, generative agents’ facility with moral language can potentially enable robust and veridical justifications for their decisions. Other bottom-up approaches learn to emulate human behaviour or judgments; the justification for their verdict in some cases is simply that they are good predictors of what some representative people would think. That is a poor justification. More ethically sensitive models could instead do chain-of-thought reasoning, where they first identify the morally relevant features of a situation, then decide based on those features. This is a significant step forward.

G enerative agents’ current social role is scripted by our existing digital infrastructure. They have been integrated into search, content-generation and the influencer economy. They are already replacing customer service agents. They will (I hope) render MOOCs (massive open online courses) redundant. I want to focus next on three more ambitious roles for generative agents in society, arranged by the order in which I expect them to become truly widespread. Of necessity, this is just a snapshot of the weird, wonderful, and worrying ways in which generative agents will change society over the near- to mid-term.

Progress in LLMs has revolutionised the AI enthusiast’s oldest hobbyhorse: the AI companion. Generative agents powered by GPT-4- level models, with fine-tuned and metaprompt-scripted ‘personalities’, augmented with long-term memory and the ability to take a range of actions in the world, can now offer vastly more companionable, engaging and convincing simulations of friendship than has ever before been feasible, opening up a new frontier in human-AI interaction. People habitually anthropomorphise, well, everything; even a very simple chatbot can inspire unreasonable attachment. How will things change when everyone has access to incredibly convincing generative agents that perfectly simulate real personalities, that lend an ‘ear’ or offer sage advice whenever called upon – and on top of that can perfectly recall everything you have ever shared?

Some will instinctively recoil at this idea. But intuitive disgust is a fallible moral guide when faced with novel social practices, and an inadequate foundation for actually preventing consenting adults from creating and interacting with these companions. And yet, we know from our experience with social media that deploying these technological innovations without adequate foresight predictably leaves carnage in its wake. How can we enter the age of mainstream AI companions with our eyes open, and mitigate those risks before they eventuate?

Will some practices become socially unacceptable in real friendships when one could do them with a bot?

Suppose the companion you have interacted with since your teens is hosted in the cloud, as part of a subscription service. This would be like having a beloved pet (or friend?) held hostage by a private company. Worse still, generative agents are fundamentally inconstant – their personalities and objectives can be changed exogenously, by simply changing their instructions. And they are extremely adept at manipulation and deception. Suppose some Right-wing billionaire buys the company hosting your companion, and instructs all the bots to surreptitiously nudge their users towards more conservative views. This could be a much more effective means of mind-control than just buying a failing social media platform. And these more capable companions – which can potentially be integrated with other AI breakthroughs, such as voice synthesis – will be an extraordinary force-multiplier for those in the business of radicalising others.

Beyond anticipating AI companions’ risks, just like with social media they will induce many disorienting societal changes – whether for better or worse may be unclear ahead of time. For example, what indirect effect might AI companions have on our other, non-virtual social relationships? Will some practices become socially unacceptable in real friendships when one could do them with a bot? Or would deeper friendships lose something important if these lower-grade instrumental functions are excised? Or will AI companions contribute invaluably to mental health while strengthening ‘real’ relationships?

This last question gets to the heart of a bigger issue with generative AI systems in general, and generative agents in particular. LLMs are trained to predict the next token. So generative agents have no mind, no self. They are excellent simulations of human agency. They can simulate friendship, among many other things . We must therefore ask: does this difference between simulation and reality matter ? Why? Is this just about friendship, or are there more general principles about the value of the real? I wasn’t fully aware of this before the rise of LLMs, but it turns out that I am deeply committed to things being real. A simulation of X, for almost any putatively valuable X, has less moral worth, in my view, than the real thing. Why is that? Why will a generative agent never be a real friend? Why do I want to stand before Edward Hopper’s painting Nighthawks (1942) myself, instead of seeing an infinite number of aesthetically equally pleasing products of generative AI systems? I have some initial thoughts; but as AI systems become ever better at simulating everything that we care about, a fully worked-out theory of the value of the real, the authentic, will become morally and practically essential.

T he pathologies of the digital public sphere derive in part from two problems. First, we unavoidably rely on AI to help us navigate the functionally infinite amount of online content. Second, existing systems for allocating online attention support the centralised, extractive power of a few big tech companies. Generative agents, functioning as attention guardians, could change this.

Our online attention is presently allocated using machine learning systems for recommendation and information-retrieval that have three key features: they depend on vast amounts of behavioural data; they infer our preferences from our revealed behaviour; and they are controlled by private companies with little incentive to act in our interests. Deep reinforcement learning-based recommender systems, for example, are a fundamentally centralising and surveillant technology. Behavioural data must be gathered and centralised to be used to make inferences about relevance and irrelevance. Because this data is so valuable, and collecting it is costly, those who do so are not minded to share it – and because it is so potent, there are good data protection-based reasons not to do so. As a result, only the major platforms are in a position to make effective retrieval and recommendation tools; their interests and ours are not aligned, leading to the practice of optimising for engagement, so as to maximise advertiser returns, despite the individual and societal costs. And even if they aspired to actually advance our interests, reinforcement learning permits inferring only revealed preferences – the preferences that we act on, not the preferences we wish we had. While the pathologies of online communication are obviously not all due to the affordances of recommender systems, this is an unfortunate mix.

Generative agents would enable attention guardians that differ in each respect. They would not depend on vast amounts of live behavioural data to function. They can (functionally) understand and operationalise your actual, not your revealed, preferences. And they do not need to be controlled by the major platforms.

They could provide recommendation and filtering without surveillance and engagement-optimisation

Obviously, LLMs must be trained on tremendous amounts of data, but once trained they are highly adept at making inferences without ongoing surveillance. Imagine that data is blood. Existing deep reinforcement learning-based recommender systems are like vampires that must feed on the blood of the living to survive. Generative agents are more like combustion engines, relying on the oil produced by ‘fossilised’ data. Existing reinforcement learning recommenders need centralised surveillance in order to model the content of posts online, to predict your preferences (by comparing your behaviour with others’), and so to map the one to the other. Generative agents could understand content simply by understanding content. And they can make inferences about what you would benefit from seeing using their reasoning ability and their model of your preferences, without relying on knowing what everyone else is up to.

This point is crucial: because of their facility with moral and related concepts, generative agents could build a model of your preferences and values by directly talking about them with you, transparently responding to your actual concerns instead of just inferring what you like from what you do. This means that, instead of bypassing your agency, they can scaffold it, helping you to honour your second-order preferences (about what you want to want), and learning from natural-language explanations – even oblique ones – about why you don’t want to see some particular post. And beyond just pandering to your preferences, attention guardians could be designed to be modestly paternalistic as well – in a transparent way.

And because these attention guardians would not need behavioural data to function, and the infrastructure they depend on need not be centrally controlled by the major digital platforms, they could be designed to genuinely operate in your interests, and guard your attention, instead of exploiting it. While the major platforms would undoubtedly restrict generative agents from browsing their sites on your behalf, they could transform the experience of using open protocol-based social media sites, like Mastodon, providing recommendation and filtering without surveillance and engagement-optimisation.

L astly, LLMs might enable us to design universal intermediaries, generative agents sitting between us and our digital technologies, enabling us to simply voice an intention and see it effectively actualised by those systems. Everyone could have a digital butler, research assistant, personal assistant, and so on. The hierophantic coder class could be toppled, as everyone could conjure any program into existence with only natural-language instructions.

At present, universal intermediaries are disbarred by LLMs’ vulnerability to being hijacked by prompt injection. Because they do not clearly distinguish between commands and data, the data in their context window can be poisoned with commands directing them to behave in ways unintended by the person using them. This is a deep problem – the more capabilities we delegate to generative agents, the more damage they could do if compromised. Imagine an assistant that triages your email – if hijacked, it could forward all your private mail to a third party; but if we require user authorisation before the agent can act, then we lose much of the benefit of automation.

Excising the currently ineliminable role of private companies would be significant moral progress

But suppose these security hurdles can be overcome. Should we welcome universal intermediaries? I have written elsewhere that algorithmic intermediaries govern those who use them – they constitute the social relations that they mediate, making some things possible and others impossible, some things easy and others hard, in the service of implementing and enforcing norms. Universal intermediaries would be the apotheosis of this form, and would potentially grant extraordinary power to the entities that shape those intermediaries’ behaviours, and so govern their users. This would definitely be a worry!

Conversely, if research on LLMs continues to make significant progress, so that highly capable generative agents can be run and operated locally, fully within the control of their users, these universal intermediaries could enable us to autonomously govern our own interactions with digital technologies in ways that the centralising affordances of existing digital technologies render impossible. Of course, self-governance alone is not enough (we must also coordinate). But excising the currently ineliminable role of private companies would be significant moral progress.

Existing generative AI systems are already causing real harms in the ways highlighted by the critics above. And future generative agents – perhaps not the next generation, but before too long – may be dangerous enough to warrant at least some of the fears of looming AI catastrophe. But, between these two extremes, the novel capabilities of the most advanced AI systems will enable a genre of generative agents that is either literally unprecedented, or else has been achieved only in a piecemeal, inadequate way before. These new kinds of agents bring new urgency to previously neglected philosophical questions. Their societal impacts may be unambiguously bad, or there may be some good mixed in – in many respects, it is too early to say for sure, not only because we are uncertain about the nature of those effects, but because we lack adequate moral and political theories with which to evaluate them. It is now commonplace to talk about the design and regulation of ‘frontier’ AI models. If we’re going to do either wisely, and build generative agents that we can trust (or else decide to abandon them entirely), then we also need some frontier AI ethics.

Handwritten notes in black ink on an open notebook, with red and black corrections.

Thinkers and theories

Paper trails

Husserl’s well-tended archive has given him a rich afterlife, while Nietzsche’s was distorted by his axe-grinding sister

Peter Salmon

Medieval manuscript illustration of a goat and a person holding a disc, with gold circles in the background, surrounded by text in Latin script.

Philosophy of mind

The problem of erring animals

Three medieval thinkers struggled to explain how animals could make mistakes – and uncovered the nature of nonhuman minds

Elderly couple holding hands while standing in the street. The woman holds a colourful fan partially covering her face. A man in casual attire walks by on the right. Two trees and a white building with large windows are in the background, with three people looking out of one of the windows.

Moral progress is annoying

You might feel you can trust your gut to tell right from wrong, but the friction of social change shows that you can’t

Daniel Kelly & Evan Westra

Black and white photograph depicts a flood with rising water levels in a residential area. Strong currents and waves are visible, and houses in the background are partially submerged. Floodwater covers much of the landscape, with a lone tree and partial wooden structure in the foreground.

The disruption nexus

Moments of crisis, such as our own, are great opportunities for historic change, but only under highly specific conditions

Roman Krznaric

Image of M87 galaxy showing a bright yellowish central core with a jet of blue plasma extending outward into space. The background is filled with faint stars and a hazy, brownish hue

History of science

His radiant formula

Stephen Hawking’s greatest legacy – a simple little equation now 50 years old – revealed a shocking aspect of black holes

Roger Highfield

Close-up image of a jumping spider showing its detailed features, including multiple eyes, hairy legs, and fangs. The spider is facing forward with a white background.

What is intelligent life?

Our human minds hold us back from truly understanding the many brilliant ways that other creatures solve their problems

Abigail Desmond & Michael Haslam

Essay Service Examples Technology Artificial Intelligence

Artificial Intelligence and How It Changes the World: Argumentative Essay

  • Proper editing and formatting
  • Free revision, title page, and bibliography
  • Flexible prices and money-back guarantee

document

Our writers will provide you with an essay sample written from scratch: any topic, any deadline, any instructions.

reviews

Cite this paper

Related essay topics.

Get your paper done in as fast as 3 hours, 24/7.

Related articles

Artificial Intelligence and How It Changes the World: Argumentative Essay

Most popular essays

  • Artificial Intelligence

Starting from Turing test in 1950, Artificial Intelligence has been brought on public notice for...

In a world where technology plays a significant role in individual lives, technology focuses on...

  • Intelligence

Artificial Intelligence, also known as AI, is amongst the latest trend in today’s world. AI is...

  • Perspective

Artificial Intelligence is a quickly growing industry, and one that is changing all the time. The...

First of all let’s look at some facts that we have managed to accomplished with digital...

The complexity and height of data in healthcare means that artificial intelligence (AI) is...

Here, by systems thinking gender bias and sustainability challenges, the issues with artificial...

Artificial Intelligence (AI) can be defined as the capability of a computer based system to think...

While intelligence collection plays a major role in the intelligence cycle, it is significant in...

Join our 150k of happy users

  • Get original paper written according to your instructions
  • Save time for what matters most

Fair Use Policy

EduBirdie considers academic integrity to be the essential part of the learning process and does not support any violation of the academic standards. Should you have any questions regarding our Fair Use Policy or become aware of any violations, please do not hesitate to contact us via [email protected].

We are here 24/7 to write your paper in as fast as 3 hours.

Provide your email, and we'll send you this sample!

By providing your email, you agree to our Terms & Conditions and Privacy Policy .

Say goodbye to copy-pasting!

Get custom-crafted papers for you.

Enter your email, and we'll promptly send you the full essay. No need to copy piece by piece. It's in your inbox!

  • WCAIR Competition
  • City Ranking List
  • News and information

UN Office for Global Artificial Intelligence (UNOGAI)

"omni possibile exigit existere"

Fantasy Robot Shadow Wall  - KELLEPICS / Pixabay

Is Artificial Intelligence Helping or Hurting Human Employment?

Possibly one of the most contentious aspects of our projected use of artificial intelligence is how it will affect the jobs and careers of humans. Which, of course, directly relates to how it will affect our livelihoods. There are typically two camps under the artificial intelligence umbrella – those who see AI as a form of assistance for professionals and manufacturers and those who view the use of artificial intelligence as a means to an end for their careers. Those who the use of artificial intelligence are worried, understandably so, that they could lose their jobs to a robot that comes with a much cheaper price tag.

There is no doubt that artificial intelligence has moved from sci-fi movies and fiction to a very real concept used in our world, today. Those who suggest that artificial intelligence is a threat to workforces all over the world predict that as much as 30 percent of the world’s human workforce could be replaced with intelligence agents and robots. To put that number in perspective, that is 400-800 million jobs within ten years, which means that as many as 375 million people could possibly have to switch industries entirely.

Others argue that artificial intelligence will help create a brighter future for the workforce by creating more jobs. This camp argues that robots are being to replace jobs that aren’t good or high-paying, to begin with, and the use of robots or AI agents for these kinds of jobs will empower and influence human employees to work towards higher-paying and higher-quality positions. There is, however, the fear that the transition will become generational and somewhat bumpy.

How do we really prepare an entire international workforce for a seismic shift? Certain industries are already experiencing the application of artificial intelligence systems, devices, and agents to assist professionals. One of the most notable industries is health care systems throughout the globe. These developments come in many different including systems that help doctors reach diagnostic decisions, triage assistance, and even digital applications that help trace and contain disease and virus outbreaks. For now, these technologies are helping the health care industry, but it is natural for health care professionals to fear the possibility of being replaced.

Of course, it is no secret that digital technologies greatly improve productivity, but it has not yet been confirmed that productivity growth will also lead to a growth in employment. Also, if developers, creators, officials, and corporation heads are suggesting that automation and digital technologies allow the workforce to seek better jobs, how even is the playing field? Sure, many people may be able to seek out higher education or career development resources, but what about the individuals who do not have the access or funding to support going to college or a trade school?

Before making claims regarding how artificial intelligence is going to affect the global employment situation, it is the responsibility of global leaders to invest time and resources to determine how AI affects the global workforce.

Related Posts

Fantasy Forward End Time  - KELLEPICS / Pixabay

Can Artificial Intelligence Help Us Save Our Planet?

Hospital Operation Doctor Medical  - HansMartinPaul / Pixabay

Would you let a robot surgeon operate on you?

will ai help the world or hurt it essay

Environmental Aspects Of AI Often Ignored

will ai help the world or hurt it essay

6 reasons to stop seeing AI as a threat

  • Far from replacing humanity, artificial intelligence could elevate the human condition by helping us comprehend, explain and shape our world
  • From discovering new cures to mitigating climate change, AI-driven research can help us unlock a new era of human flourishing

To that end, I propose six maxims. The first is a famous quip attributed to the Carthaginian general Hannibal: “I shall either find a way or make one.” AI can help us find paths that we couldn’t see before. It can help us make new ones through the force of human creativity. Tools like ChatGPT, Copilot and Pi are trained on material by and about people. Far from replacing us, they extend us.

Imagine finding a previously indiscernible thread of insight that runs through Caravaggio, Rousseau and Vivaldi; or a thread tying together the ingredients you just happen to have in your kitchen. A vast collection of human creation and past contributions hangs before us like an expanding tapestry. We now have the tools to do more with it than any previous generation ever could.

The second maxim is: “We are symbols, and inhabit symbols”. That is how Ralph Waldo Emerson described our use of language to comprehend, explain and shape the world. Humans have always relied on tools. That is what symbols are. They enable us to create things that did not exist before. Consider the griffin, with the head and wings of an eagle and the body of a lion. It is a human creation that reflects some reality we want to see in the world.

True, many imaginative creations – from Mary Shelley’s monster in Frankenstein to James Cameron’s killer cyborg in Terminator – are meant to be cautionary. We naturally feel fear when initially encountering “the other”. But the griffin reminds us we can convert fear into a sense of majestic possibility.

The third maxim is to build cathedrals, as these ennoble our efforts and turn mere groupings of humanity into fellowships. Actual cathedrals are some of humankind’s most awe-inspiring creations.

will ai help the world or hurt it essay

AI robot conductor makes debut leading South Korea’s national orchestra

Such projects require many sets of hands, working in concert across regions, disciplines and even generations. Scientific discoveries and technological innovations are stones in the cathedral of human progress.

The fourth maxim is that we must take small risks to have any hope of navigating the big ones. Rather than trying to eliminate risk altogether, we ought to welcome challenges that could bring failure, because these create opportunities for iteration, reflection, discussion and continual improvement.

Ultimately, we will get better regulation when these technologies have been deployed widely, allowing more people to integrate them into their lives.

will ai help the world or hurt it essay

How a Hong Kong school embraces ChatGPT in the classroom

The fifth maxim is that technology is what makes us human. If we buy into the notion that AI is the antithesis to humankind’s thesis, we will anticipate a future of half-human, half-machine cyborgs. But that is not really how it works. The combination of thesis and antithesis leads to a new thesis. The two evolve together. The resulting synthesis is a better human.

Moreover, AI may help us become more humane. Consider how responsive, present and patient conversational AI models and chatbots can be. These features could have a profound impact on us.

The sixth and final maxim is we have an obligation to make the future better than the present. Imagine a digital doctor or tutor in everyone’s pocket. What are the costs of that happening later, rather than sooner?

DatabaseTown

Artificial Intelligence Argumentative Essay

Some argue that artificial intelligence is playing positive role in our lives and has many benefits, such as increased efficiency, productivity, and convenience, while others think that it has negative consequences, like job displacement and ethical concerns. In this essay, you will see both sides of the arguments i.e. potential benefits and drawbacks of AI technology.

Introduction

What is AI? The computers and machines can do things that usually need human thinking, like learning, solving problems, and understanding language. It’s like how our brains work, but with machines. AI can learn from the experiences and get better at doing things on its own, and it’s used for robots, speech and image recognition, and even self-driving cars!

It is important in today’s world because it has the potential to transform industries, improve efficiency, and enhance decision-making across a wide range of fields. It is also seen as a key driver of economic growth and innovation in many countries.

The Pros of AI

Efficiency and productivity.

Since AI can process large amounts of data quickly and efficiently, therefore, this ability makes it a powerful tool for analyzing and deriving insights from large and complex datasets. It can identify patterns, correlations, and trends that might not be immediately apparent to humans. This ability to process data at large scale and high speed has significant implications for businesses, enable them to make faster and more informed decisions, improve customer experiences, and optimize operations.

AI has also capacity to process data has facilitated the development of technologies such as predictive analytics, natural language processing, and machine learning, which have opened up new possibilities in fields such as healthcare, finance, transportation, and many more.

Improved Safety and Security

Artificial Intelligence has revolutionized the way we approach security risks by offering advanced and intelligent monitoring systems. These systems can detect potential security threats through various mechanisms, including facial recognition, predictive analytics, and anomaly detection. It can also identify patterns and trends that humans might not be able to detect on their own because it can analyze data in real-time.

This capability has led to the development of improved surveillance systems that help prevent crimes and enhance transportation safety. By utilizing AI-powered monitoring, security personnel can quickly identify and respond to potential threats which makes it faster and more effective action in critical situations.

Healthcare Advancements

By analyzing the medical data, AI can detect potential diseases at an early stage. With different algorithms, AI can identify patterns and anomalies in medical images, lab reports, and patient records that might be missed by human doctors. This capability has led to improved medical diagnosis and treatment by enabling doctors to make faster and more accurate diagnoses, develop personalized treatment plans, and monitor patients’ health more effectively. AI-powered tools can also help identify new drug targets, design more effective clinical trials, and optimize healthcare delivery.

Environmental Impact

Artificial Intelligence can identify and mitigate environmental risks by providing intelligent monitoring and analysis of natural systems. For example, AI-powered tools can analyze satellite data to identify deforestation, track ocean currents to predict weather patterns, and monitor air quality to detect pollution hotspots. This ability to process large amounts of environmental data quickly and accurately can help decision-makers develop effective strategies for resource management and risk mitigation.

AI can also help improve resource management and reduce waste by optimizing supply chains, reducing energy consumption, and minimizing material waste. By leveraging AI to optimize production and distribution systems, companies can reduce their carbon footprint, minimize waste, and improve efficiency. For instance, AI-powered predictive maintenance systems can help identify equipment failures before they occur, minimizing downtime and reducing waste.

The Cons of AI

Job displacement.

Artificial Intelligence will also replace jobs in various industries by automating tasks that were previously performed by humans. While this can lead to increased efficiency and productivity, there are concerns that it may also contribute to income inequality and unemployment rates. As machines become more capable of performing tasks traditionally done by humans, there is a risk that jobs may disappear or become obsolete, particularly in industries with a high degree of routine work.

Privacy Concerns

AI’s ability to collect and analyze personal data has raised concerns over surveillance and potential misuse of personal information. As AI technologies become more sophisticated, they can gather vast amounts of data from individuals, such as their preferences, behavior, and location, leading to concerns over privacy and data protection. This has led to calls for stricter regulations to protect personal data and ensure that AI is used in an ethical and responsible manner.

Bias and Discrimination

Artificial Intelligence systems rely heavily on historical data to make predictions and decisions, which can lead to perpetuation of biases and discrimination. If historical data contains biases or reflects past discrimination, then the AI system may replicate those biases and perpetuate them. This has led to concerns over the ethical implications of AI decision-making, particularly in sensitive areas such as hiring, lending, and criminal justice. To address these concerns, researchers are exploring ways to mitigate bias in AI systems, such as by ensuring diversity in training data, auditing algorithms for bias, and developing explainable AI to increase transparency and accountability. It is important to ensure that AI is used in a fair and ethical manner to avoid perpetuating biases and discrimination.

Control and Safety Concerns

As AI systems become more advanced, there is a risk that they may become uncontrollable or dangerous, leading to potential risks to society. To mitigate these risks, there is a need for regulation and oversight of AI development and use. This can include measures such as ethical guidelines, safety standards, and transparency requirements to ensure that AI is developed and used in a responsible and safe manner. By balancing innovation and regulation, we can harness the potential of AI while minimizing potential risks to society.

More to read

  • Artificial Intelligence Tutorial
  • History of Artificial Intelligence
  • 4 Types of Artificial Intelligence
  • What is the purpose of Artificial Intelligence?
  • Artificial and Robotics
  • Benefits of Artificial Intelligence
  • Intelligent Agents in AI
  • Production System in AI
  • Engineering Applications f AI
  • Artificial Intelligence Vs. Machine Learning
  • Artificial Intelligence Vs. Human Intelligence
  • Artificial Intelligence Vs. Data Science
  • Artificial Intelligence Vs. Computer Science
  • What Artificial Intelligence Cannot Do?
  • Importance of Artificial Intelligence
  • How has Artificial Intelligence Impacted Society?
  • Application of Artificial Intelligence in Robotics

Similar Posts

What is Robot Framework?

What is Robot Framework?

Robot Framework is an open-source automation framework that enables easy-to-read, keyword-driven test scripts for various types of software testing, including…

50 Artificial Intelligence Terms You Need to Know

50 Artificial Intelligence Terms You Need to Know

In this list, we have compiled 50 common Artificial Intelligence terms with brief descriptions to help you better understand the…

Artificial Intelligence Vs Computer Science | A Comparative Analysis

Artificial Intelligence Vs Computer Science | A Comparative Analysis

Artificial Intelligence (AI) and Computer Science (CS) are two different fields that share some similarities. They also have significant differences…

10 Best Laptops for AI and Machine Learning in 2023 (Updated)

10 Best Laptops for AI and Machine Learning in 2023 (Updated)

Artificial intelligence (AI) and machine learning (ML) are rapidly growing fields that are changing the way we live and work….

Application of AI in Robotics | Role of AI in Working of Robots

Application of AI in Robotics | Role of AI in Working of Robots

Artificial intelligence (AI) has been applied in the field of robotics to enable robots to perform tasks that require intelligence…

15 Benefits of Artificial Intelligence in Society

15 Benefits of Artificial Intelligence in Society

The impact of artificial intelligence reaches far beyond individual applications and is shaping societies as a whole. In this article,…

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

Home — Essay Samples — Information Science and Technology — Modern Technology — Artificial Intelligence

one px

Essays on Artificial Intelligence

Artificial intelligence essay topics for college students.

Welcome, college students! Writing an essay on artificial intelligence can be an exciting and challenging task. The key to a successful essay lies in selecting the right topic that sparks your interest and allows you to showcase your creativity. In this resource page, we will provide you with a variety of essay types and topics to help you get started on your AI essay journey.

Argumentative Essay Topic for Artificial Intelligence Essays

  • The ethical implications of AI technology
  • The impact of AI on job automation
  • Regulating AI development for societal benefits

Introduction Paragraph Example: Artificial intelligence has revolutionized the way we interact with technology, raising important ethical questions about its implications on society. In this essay, we will explore the ethical challenges of AI technology and discuss the need for regulations to ensure its responsible development.

Conclusion Paragraph Example: In conclusion, it is evident that the ethical implications of AI technology are multifaceted and require careful consideration. By implementing regulations and ethical guidelines, we can harness the benefits of AI while minimizing its potential risks.

Compare and Contrast Essay Topics for Artificial Intelligence

  • The differences between narrow AI and general AI
  • Comparing AI in science fiction to real-world applications
  • The impact of AI on different industries
  • AI vs. human intelligence: Strengths and weaknesses
  • Machine learning vs. deep learning
  • AI in healthcare vs. AI in finance
  • AI-driven automation vs. traditional automation
  • Cloud-based AI vs. edge AI
  • The role of AI in developed vs. developing countries
  • AI in education vs. AI in entertainment

Introduction Paragraph Example: The field of artificial intelligence encompasses a wide range of technologies, from narrow AI systems designed for specific tasks to the hypothetical concept of general AI capable of human-like intelligence. In this essay, we will compare and contrast the characteristics of narrow and general AI to understand their implications on society.

Conclusion Paragraph Example: Through this comparison, we have gained insights into the diverse applications of AI technology and the potential challenges it poses to various industries. By understanding the differences between narrow and general AI, we can better prepare for the future of artificial intelligence.

Descriptive Essay Essay Topics for Artificial Intelligence

  • The role of AI in healthcare advancements
  • The development of AI algorithms for autonomous vehicles
  • The applications of AI in natural language processing
  • The architecture of neural networks
  • The evolution of AI from the 20th century to today
  • The ethical implications of AI decision-making
  • The process of training an AI model
  • The impact of AI on the job market
  • The future potential of quantum AI
  • The role of AI in personalized marketing

Introduction Paragraph Example: AI technology has transformed the healthcare industry, enabling innovative solutions that improve patient care and diagnosis accuracy. In this essay, we will explore the role of AI in healthcare advancements and its impact on the future of medicine.

Conclusion Paragraph Example: In conclusion, the integration of AI technology in healthcare has the potential to revolutionize the way we approach patient care and medical research. By leveraging AI algorithms and machine learning capabilities, we can achieve significant advancements in the field of medicine.

Persuasive Essay Essay Topics for Artificial Intelligence

  • Promoting diversity and inclusion in AI development
  • The importance of ethical AI education in schools
  • Advocating for AI transparency and accountability
  • The necessity of regulating AI technology
  • Why AI should be used to combat climate change
  • The benefits of AI in improving public safety
  • Encouraging responsible AI usage in social media
  • The potential of AI to revolutionize education
  • Why businesses should invest in AI technology
  • The role of AI in enhancing cybersecurity

Introduction Paragraph Example: As artificial intelligence continues to permeate various aspects of our lives, it is essential to prioritize diversity and inclusion in AI development to ensure equitable outcomes for all individuals. In this essay, we will discuss the importance of promoting diversity and inclusion in AI initiatives and the benefits it brings to society.

Conclusion Paragraph Example: By advocating for diversity and inclusion in AI development, we can create a more equitable and socially responsible future for artificial intelligence. Through ethical education and transparent practices, we can build a foundation of trust and accountability in AI technology.

Narrative Essay Essay Topics for Artificial Intelligence

  • A day in the life of an AI researcher
  • The journey of building your first AI project
  • An imaginary conversation with a sentient AI being
  • The story of a world transformed by AI
  • How AI solved a major global problem
  • A personal encounter with AI technology
  • The evolution of AI in your lifetime
  • The challenges faced while developing an AI startup
  • A future where AI coexists with humans
  • Your experience learning about AI for the first time

Introduction Paragraph Example: Imagine a world where artificial intelligence blurs the lines between human and machine, offering new possibilities and ethical dilemmas. In this narrative essay, we will embark on a journey through the eyes of an AI researcher, exploring the challenges and discoveries that come with pushing the boundaries of technology.

Conclusion Paragraph Example: Through this narrative journey, we have delved into the complexities of artificial intelligence and the ethical considerations that accompany its development. By embracing the possibilities of AI technology while acknowledging its limitations, we can shape a future that balances innovation with ethical responsibility.

Hooks for Artificial Intelligence Essay

  • "Imagine a world where machines not only perform tasks but also think, learn, and make decisions just like humans. Welcome to the era of Artificial Intelligence (AI), a revolutionary force reshaping our future."
  • "From self-driving cars to smart personal assistants, AI is seamlessly integrating into our daily lives. But what lies beneath this cutting-edge technology, and how will it transform the way we live and work?"
  • "As AI continues to advance at an unprecedented pace, questions about its ethical implications and impact on society become more urgent. Can we control the intelligence we create, or will it control us?"
  • "AI is not just a futuristic concept confined to science fiction. It’s here, and it’s real, influencing industries, healthcare, education, and even our personal lives. How prepared are we for this technological revolution?"
  • "The debate over AI is heating up: Will it lead to a utopian society with endless possibilities, or is it a Pandora's box with risks we have yet to fully understand? The answers may surprise you."

Future of Cyberbullying: Innovating Safety Amidst Technology

Advantages and problems of artificial intelligence, made-to-order essay as fast as you need it.

Each essay is customized to cater to your unique preferences

+ experts online

Artificial Intelligence: Good and Bad Effects for Humanity

How robots can take over humanity, artificial intelligence, artificial intelligence as the next digital frontier, let us write you an essay from scratch.

  • 450+ experts on 30 subjects ready to help
  • Custom essay delivered in as few as 3 hours

The Possibility of Humanity to Succumb to Artificial Intelligence

The ethical issues of artificial intelligence, ethical issues in using ai technology today, artificial intelligence: pros and cons, get a personalized essay in under 3 hours.

Expert-written essays crafted with your exact needs in mind

Artificial Intelligence: Applications, Advantages and Disanvantages

The possibility of machines to be able to think and feel, artificial intelligence: what really makes us human, how artificial intelligence is transforming the world, risks and benefits of ai in the future, the possibility of artificial intelligence to replace teachers, artificial intelligence, machine learning and deep learning, the ethical challenges of artificial intelligence, will artificial intelligence have a progressive or retrogressive impact on our society, artificial intelligence in medicine, impact of technology: how artificial intelligence will change the future, artificial intelligence in home automation, artificial intelligence and the future of human rights, artificial intelligence (ai) and its impact on our life, impact of artificial intelligence on hr jobs, the ability of artificial intelligence to make society more sustainable, deep learning for artificial intelligence, the role of artificial intelligence in future technology, artificial intelligence against homelessness and hiv, artificial intelligence in radiology.

Artificial intelligence (AI) refers to the intellectual capabilities exhibited by machines, contrasting with the innate intelligence observed in living beings, such as animals and humans.

The inception of artificial intelligence research as an academic field can be traced back to its establishment in 1956. It was during the renowned Dartmouth conference of the same year that artificial intelligence acquired its distinctive name, definitive purpose, initial accomplishments, and notable pioneers, thereby earning its reputation as the birthplace of AI. The esteemed figures of Marvin Minsky and John McCarthy are widely recognized as the founding fathers of this discipline.

  • The term "artificial intelligence" was coined in 1956 by computer scientist John McCarthy.
  • McKinsey Global Institute estimates that by 2030, automation and AI technologies could contribute to a global economic impact of $13 trillion.
  • AI is used in various industries, including healthcare, finance, and transportation.
  • The healthcare industry is leveraging AI for improved patient care. A study published in the journal Nature Medicine reported that an AI model was able to detect breast cancer with an accuracy of 94.5%, outperforming human radiologists.
  • Ethical concerns surrounding AI include privacy issues, bias in algorithms, and the potential for job displacement.

Artificial Intelligence is an important topic because it has the potential to revolutionize industries, improve efficiency, and enhance decision-making processes. As AI technology continues to advance, it is crucial for society to understand its implications, both positive and negative, in order to harness its benefits while mitigating its risks.

1. Russell, S. J., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. 2. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. 3. Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Viking. 4. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. 5. Chollet, F. (2017). Deep Learning with Python. Manning Publications. 6. Domingos, P. (2018). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. 7. Ng, A. (2017). Machine Learning Yearning. deeplearning.ai. 8. Marcus, G. (2018). Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage. 9. Winfield, A. (2018). Robotics: A Very Short Introduction. Oxford University Press. 10. Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press.

Relevant topics

  • Digital Era
  • Computer Science

By clicking “Check Writers’ Offers”, you agree to our terms of service and privacy policy . We’ll occasionally send you promo and account related email

No need to pay just yet!

Bibliography

We use cookies to personalyze your web-site experience. By continuing we’ll assume you board with our cookie policy .

  • Instructions Followed To The Letter
  • Deadlines Met At Every Stage
  • Unique And Plagiarism Free

will ai help the world or hurt it essay

  • View programs
  • Take our program quiz
  • Online BBA Degree Program
  • > Specialization in Artificial Intelligence
  • > Specialization in Business Analytics
  • > Specialization in Digital Marketing
  • > Specialization in Digital Transformation
  • > Specialization in Entrepreneurship
  • > Specialization in International Business
  • > Specialization in Product Management
  • > Specialization in Supply Chain Management
  • Online BBA Top-Up Program
  • Associate of Applied Science in Business (AAS)
  • Online MS Degree Programs
  • > MS in Data Analytics
  • > MS in Digital Transformation
  • > MS in Entrepreneurship
  • Online MBA Degree Program
  • > Specialization in Cybersecurity
  • > Specialization in E-Commerce
  • > Specialization in Fintech & Blockchain
  • > Specialization in Sustainability
  • Undergraduate certificates
  • Graduate certificates
  • Undergraduate courses
  • Graduate courses
  • Apply to Nexford
  • Transfer to Nexford
  • Explore Education Partners
  • Scholarships
  • For organizations
  • Career Coalition
  • Accreditation
  • Our faculty
  • Career services
  • Academic model
  • Learner stories
  • Book consultation
  • Careers - we're hiring!

will ai help the world or hurt it essay

How Will Artificial Intelligence Affect Jobs 2024-2030

You would have been living under a rock if you did not know how artificial intelligence is set to affect jobs in 2024-2030. AI like ChatGPT seems to be stealing all of the headlines at the moment, Google unveiled new AI software to build presentations, analyze and enter data, and write content, and there are so many more AI tools like Gamma and Numerous AI.

Those that are resisting, rather than riding the crest of the wave will not be making hey whilst the sun shines when it comes to landing in-demand jobs in the next 6 years and enjoying job growth. AI will be taking some jobs, but it will be creating new ones!  

Here are the most likely jobs that artificial intelligence will affect from 2024-2030:

How artificial intelligence will change the world

Will ai help the world or hurt it.

Like any controversial subject, there will always be people who are for it, and those that are against it. Artificial Intelligence is no different. In fact, as new ai tools are introduced, and the news around them grows, so the division between the two camps will grow with it. Many market research analysts say that AI has the potential to bring about numerous positive changes in society, including enhanced productivity, improved healthcare, and increased access to education. But we need to adapt right now.

Others will say, mostly those working in human work types of jobs that are manually repetitive, that ai and robotics is a disruptive force and when it comes to the future of jobs it merely serves to steal jobs. But robots and ai technologies can and will create a great many new vocations and help solve complex problems and make our daily lives easier and more convenient. The jury is not yet out on this, but the leaning is more toward ai being a positive force rather than a negative one.

How will AI affect jobs and the economy?

McKinsey global institute says that at the global average level of adoption and absorption and advances in ai implied by their simulation, AI has the profound impact to deliver additional global economic activity of around $13 trillion in the foreseeable future and by 2030, or about 16% higher cumulative GDP compared with today. This amounts to 1.2% additional GDP growth per year. If delivered, this impact would compare well with that of other general-purpose technologies through history. This will mainly come from substitution of labor by automation and increased innovation in products and services.

The same report went on to say that By 2030, the average simulation shows that some 70% of companies will have embraced the ai revolution and adopted at  least one type of AI technology but that less than half will have fully absorbed the five categories. Forbes say ai has the potential to be among the most disruptive technologies across global economies that we will ever develop.

How will artificial intelligence affect society and future?

Forbes says that the future of AI brings endless possibilities and applications that will help simplify our lives to a great extent. It will help shape the future and destiny of humanity positively, whilst Bernard Marr & Co says that the transformative impact of artificial intelligence on our society will have far-reaching economic, legal, political and regulatory implications on all types of jobs and industries that we need to be discussing and preparing for. Others in the know say that AI has the potential to bring about numerous positive changes in society both now and in the future, including enhanced productivity, improved healthcare, and increased access to education. AI-powered technologies can also help solve complex problems and make our daily lives easier and more convenient.

will ai replace human jobs and careers

How Will AI Affect Jobs - How many jobs will AI replace by 2030

Artificial intelligence (AI) could replace the equivalent of 300 million full-time jobs, a report by investment bank Goldman Sachs says. It could replace a quarter of work tasks in the US and Europe but may also mean new jobs and a productivity boom. And it could eventually increase the total annual value of goods and services produced globally by 7%. The report also predicts two-thirds of jobs in the U.S. and Europe “are exposed to some degree of AI automation,” and around a quarter of all jobs could be performed by AI entirely.

Researchers from the University of Pennsylvania and OpenAI found some educated white-collar workers earning up to $80,000 a year are the most likely to be affected by workforce automation.

Forbes also says that According to an MIT and Boston University report, AI will replace as many as two million manufacturing workers by 2025.

A study by the  McKinsey Global Institute reports that by 2030, at least 14% of employees globally could need to change their careers due to digitization, robotics, and AI advancements

What jobs are most likely to be automated?

1. customer service representative.

Most human customer service interactions are no longer done by phone with human employees manning the lines. Most of the time, the queries and problems of customers are repetitive. Answering these queries does not require high emotional or social intelligence. Therefore, AI can be used to provide automated responses to frequently asked questions.

2. Receptionists

The majority of companies across the world are now using robots at their reception. Even the calls are being managed by AI now. For example, AimeReception can see, listen, understand, and talk with guests and customers.

3. Accountants/Bookkeepers 

Many companies are now using automation and ai for their bookkeeping practices. AI-powered bookkeeping services provide an efficient accounting system and flexibility and security, considering that they are available as cloud-based services. Using ai algorithms, AI will ensure the data is collected, stored, and analyzed correctly. Using an AI accounting service is significantly less costly than paying an employee’s salary to do the same job.

Are you ready to take your career to the next level?

Nexford's Career Path Planner takes into account your experience and interests to provide you with a customized roadmap to success.

Receive personalized advice on the skills and qualifications you need to get ahead in areas like finance, marketing, management and entrepreneurship.

4. Salespeople 

Gone are the days when corporations required salespeople for advertising and retail activities. Advertising has shifted towards web and social media landscapes. The built-in target marketing capabilities in social media allow advertisers to create custom content for different types of audiences.

5. Research and analysis

The fields of data analysis and research are areas that already implement the use of artificial intelligence as a method of streamlining the process and identifying new data without human assistance. The processing power of modern computers allows for the efficient sorting, extrapolation and analysis of data. As artificial intelligence continues to improve, there may not be a need for humans to play a role in data analysis and research.

6. Warehouse work

Online sales is a steadily growing industry and comes with an increasing need for processes and automated systems that efficiently get orders onto trucks for delivery. One area of focus for streamlining the process has been the use of automation. Basic automation and artificial implementation in a warehouse allow for easy access to computerized systems to locate packages and direct staff, and future AI may even perform mechanized retrieval and loading to increase shipping capacities.

7. Insurance underwriting

When making assessments on the viability of insurance applicants, the most important work is often in analyzing the data available and applying it within a set of formulas or structures. Automation can easily complete these tasks and is continually adapting to perform more complicated duties, which may reduce how many underwriters a company requires.

Self-checkout stations at stores are an example of automation in the retail sphere and have gained prominence in grocery stores and big-box outlets. When a company makes use of self-checkout areas, it results from a cost-benefit analysis. Although allowing customers to scan their own items can increase the instances of theft, the company saves more money by reducing the need for employees working registers.

How to quickly change career

Experts say that ai and machine learning will help workers by creating more occupations than it replaces. That said, in order to ride the wave and build a new career, you have to have procured the skills necessary to get the job done. If you're exposed to ai and looking to pivot into an AI-focused role, demonstrating your knowledge and experience with AI development can give you an edge.

Why not take a read of our top 10 highest paying AI jobs article here.

To acquire the skills to stand out from other would be candidates you should: ramp up your technical skills, complete online courses, understand the industry, gain work experience, and develop your soft skills. AI will require extensive research and collaboration as it is still an emerging area. Soft skills will help set you apart from other developers who only have technical skills.

Jobs and careers being replaced by ai

Which jobs will not be replaced by ai?

It is widely touted that ai will create more jobs than it replaces. Further to that, many in certain industries will breath a sigh of relief that ai will not threaten their vocation and livelihood. These are some of the jobs that will not involve repetitive tasks and be prone to disruption. This means that ai will not replace those that perform them in the open labor market.

1. Teachers

Teachers often represent a reference point for many of us. Often, our academic decisions are partly based on how inspiring a particular teacher has been with us in the years prior. For all these reasons, it is almost impossible that we will have a fully digital teaching experience in the Future. 

2. Lawyers and judges

These positions have a strong component of negotiation, strategy and case analysis. A lot is based on the personal experience and knowledge of each specialist. It requires a certain set of skills to be able to navigate complex legal systems and argue in defense of a client in court. There is a human factor involved when it comes down to consider all the various aspects of a trial and take a final decision that could turn into years in prison, in the case of a Judge.

3. Directors, Managers and CEOs

Managing teams inside an organization is a matter of Leadership and this is not a stack of behaviors that can be written down in a code and processed in a linear way. A CEO is also the person responsible for sharing the company’s mission and value down to the team. It is very unlikely that investors will ever feel comfortable investing in a company managed by robots or algorithms.

4. HR Managers 

Although ai does assist in the hiring process to make sifting through CVs so much easier and quicker, Human Resource Managers still cover a variety of very important tasks inside an organization. Hiring new professionals is just part of their prerogatives. They also are a key position inside the organization for maintaining the staff motivated, detecting early-on signs of discontent, and manage them if possible. 

5. Psychologists and Psychiatrists 

Although a lot of face recognition technology is currently being used to develop initial AI counseling care and support, given the growing demand, mental health is a very delicate topic. Human touch is essential when it comes down to supporting people to succeed in their lives in all of the aspects that it can entail.

6. Surgeons

For sure, technology has seriously increased the accuracy with whom we are today able to diagnose and detect diseases in any medical report. Micro robotics also enhance the precision of the surgeons when it comes down to operation, enabling less invasive procedures. But being a surgeon requires the ability to connect with the patient on so many other different levels while taking a vast number of the factor under consideration at the same time. Experience, knowledge, and skills acquired throughout the years are all factors that need to be condensed in a matter of minutes during an operation.

7. Computer System Analysts

No matter how automated we become, there will always be the need of a human presence that can run maintenance work, update, improve, correct, and set-up complex software and hardware systems that often require coordination among more than one specialist in order to properly work. Reviewing the system capabilities, controlling the workflow and schedule improvements and increase automation is only part of a Computer System Analyst, a profession that is a great demand in the last years.

8. Artists and writers

Writing especially is such an imaginative fine art, and being able to place a specific selection of words in the right order is definitely a challenging endeavor. So even if AI technically would have the capacity of absorbing the content of most books in the world, in probably any language and come up with a somewhat personal style of communication, the magic and thrill of creating art with words is something that is pretty much going to rest in our domain of competition in the years to come. 

How many jobs will be lost to ai by 2025?

The World Economic Forum has estimated that artificial intelligence will replace some 85 million jobs by 2025. Freethink says that 65% of retail jobs could be automated by that year , saying that this is largely due to technological advancements, rising costs and wages, tight labor markets, and reduced consumer spending. 

How many jobs will be lost to ai by 2030?

PwC estimates that by the mid-2030s, up to 30% of jobs could be automatable, with slightly more men being affected in the long run as autonomous vehicles and other machines replace many manual tasks where their share of employment is higher. During the first and second waves, they estimate that women could be at greater risk of automation due to their higher representation in clerical and other administrative functions​.

How to embrace AI

How to embrace AI and learn skills to take advantage of this new technology

You may be wondering how you can start familiarizing yourself with AI in your work to help advance your career. LinkedIn says that the good news is that you probably already have experience with AI whether you know it or not. Asking voice assistants like Alexa and Siri questions uses AI, for example. Plenty of the apps on your phone also use AI, too. Generative AI, which is taking up all the headlines lately, is really the next step for this technology.

The company went on to say that to stay ahead in the era of artificial intelligence, it is essential to develop new skills and adapt to the changing job market. Here are some strategies for staying ahead in the era of artificial intelligence:

1. Embrace lifelong learning

In the era of AI, it is important to be constantly learning and adapting to new technologies and ways of working. This means taking courses, attending workshops and conferences, and keeping up-to-date with the latest trends in your industry.

2. Develop soft skills

While AI is great at performing routine tasks, it is still far from replicating human emotional intelligence and creativity. Developing soft skills such as communication, problem-solving, and collaboration will be crucial in the era of AI.

3. Be agile

In the era of AI, the ability to adapt quickly to changing circumstances will be key. This means being willing to learn new skills, take on new responsibilities, and pivot to new career paths.

4. Specialize

As AI becomes more ubiquitous, there will be increasing demand for workers with specialized skills and knowledge. By developing expertise in a particular area, you can increase your value to employers and differentiate yourself in the job market.

Learn from a next-gen university which embraces change

If there is one word that you need to take out of the way to transition from the current job market to the new world order of the job market affected by ai, is the word, 'agility'. The other is 'skills' and skills development at that. 

Besides learning on the job, which can take a long time and effort for all concerned, many of those looking to switch careers or start a new one, are looking to online next-gen universities that can pivot on a penny and offer the programs at a specific period in time to take advantage of the drive to greater numbers of ai related jobs. 

Here at Nexford University, we offer a BBA degree with a specialization in AI .

Doing the degree will mean that learners will learn and develop skills based on the latest employer needs and market trends – this is what the 100% online learning university calls their Workplace Alignment Model which is designed to equip those learners with the skills needed and what employers are looking for.

We also offer a MBA degree with specialization in advanced AI , for those looking for postgraduate education.

Conclusion 

The neigh sayers have seemingly concluded that ai will take millions of jobs and put people out into the street, whilst those that are excited for it and ready to embrace the change are saying that ai has the ability to create more new types of jobs than it replaces. 

That said, it would appear that resistance is futile, and that people must accept that artificial intelligence is becoming a part of our everyday lives. Every job role should embrace it, considering the efficient and cost-effective solutions it brings. It lets people focus on more creative goals by automating the decision-making processes and tedious tasks.

Artificial intelligence offers great promise to drive businesses forward, automate manufacturing processes, and deliver valuable insights. AI is increasingly being used across various industries, including logistics, manufacturing, and cybersecurity. Small businesses have also made rapid progress in creating speech recognition software for mobile devices.

To stay ahead in the era of artificial intelligence, it is essential to embrace lifelong learning, develop soft skills, be agile, and specialize in a particular area. By developing these skills and adapting to the changing job market, workers can thrive in the era of AI and take advantage of the opportunities it presents. Enrolling to do a BBA in Artificial Intelligence or an MBA in artificial intelligence can help people to get ahead and stay ahead in an ever evolving job market. Nexford offers an online BBA program and online MBA program that equip learners with the necessary skills to succeed in the ever competitive ai job market and avoid job loss.

For a more in-depth analysis download our free report .

Mark Talmage-Rostron

Mark is a college graduate with Honours in Copywriting. He is the Content Marketing Manager at Nexford, creating engaging, thought-provoking, and action-oriented content.

Join our newsletter and be the first to receive news about our programs, events and articles.

  • AI will transform the character of warfare

Technology will make war faster and more opaque. It could also prove destabilising

The word WAR with letters AI highlighted

Your browser does not support the <audio> element.

T HE COMPUTER was born in war and by war. Colossus was built in 1944 to crack Nazi codes. By the 1950s computers were organising America’s air defences. In the decades that followed, machine intelligence played a small part in warfare. Now it is about to become pivotal. Just as the civilian world is witnessing rapid progress in the power and spread of artificial intelligence ( AI ), so too must the military world prepare for an onrush of innovation. As much as it transforms the character of war , it could also prove destabilising.

Today’s rapid change has several causes. One is the crucible of war itself, most notably in Ukraine. Small, inexpensive chips routinely guide Russian and Ukrainian drones to their targets, scaling up a technology once confined to a superpower’s missiles. A second is the recent exponential advance of AI , enabling astonishing feats of object recognition and higher-order problem solving. A third is the rivalry between America and China, in which both see AI as the key to military superiority.

The results are most visible in the advance of intelligent killing machines. Aerial and naval drones have been vital to both sides in Ukraine for spotting and attacking targets. AI ’s role is as the solution to jamming, because it enables a drone to home in on targets, even if gps signals or the link to the pilot have been cut. Breaking the connection between pilot and plane should soon let armies deploy far larger numbers of low-cost munitions . Eventually self-directing swarms will be designed to swamp defences.

But what is most visible about military AI is not what is most important. As our briefing explains, the technology is also revolutionising the command and control that military officers use to orchestrate wars.

On the front line, drones embody just the last and most dramatic link in the kill chain, the series of steps beginning with the search for a target and ending in an attack. AI ’s deeper significance is what it can do before the drone strikes. Because it sorts through and processes data at superhuman speed, it can pluck every tank out of a thousand satellite images, or interpret light, heat, sound and radio waves to distinguish decoys from the real thing.

Away from the front line, it can solve much larger problems than those faced by a single drone. Today that means simple tasks, such as working out which weapon is best suited to destroying a threat. In due course, “decision-support systems” may be able to grasp the baffling complexity of war rapidly and over a wide area—perhaps an entire battlefield.

The consequences of this are only just becoming clear. AI systems, coupled with autonomous robots on land, sea and air, are likely to find and destroy targets at an unprecedented speed and on a vast scale.

The speed of such warfare will change the balance between soldier and software. Today, armies keep a man “in the loop”, approving each lethal decision. As finding and striking targets is compressed into minutes or seconds, the human may merely “sit on the loop”, as part of a human-machine team. People will oversee the system without intervening in every action.

The paradox is that even as AI gives a clearer sense of the battlefield, war risks becoming more opaque for the people who fight it. There will be less time to stop and think. As the models hand down increasingly oracular judgments, their output will become ever harder to scrutinise without ceding the enemy a lethal advantage. Armies will fear that if they do not give their AI advisers a longer leash, they will be defeated by an adversary who does. Faster combat and fewer pauses will make it harder to negotiate truces or halt escalation. This may favour defenders, who can hunker down while attackers break cover as they advance. Or it may tempt attackers to strike pre-emptively and with massive force, so as to tear down the sensors and networks on which AI -enabled armies will depend.

The scale of AI -based war means that mass and industrial heft are likely to become even more important than they are today. You might think new technology will let armies become leaner. But if software can pick out tens of thousands of targets, armies will need tens of thousands of weapons to strike them. And if the defender has the advantage, attackers will need more weapons to break through.

That is not the only reason AI warfare favours big countries. Drones may get cheaper, but the digital systems that mesh the battlefield together will be fiendishly expensive. Building AI -infused armies will take huge investments in cloud servers able to handle secret data. Armies, navies and air forces that today exist in their own data silos will have to be integrated. Training the models will call for access to vast troves of data.

Which big country does AI favour most? China was once thought to have an advantage, thanks to its pool of data, control over private industry and looser ethical constraints. Yet just now America looks to be ahead in the frontier models that may shape the next generation of military AI . And ideology matters: it is unclear whether the armies of authoritarian states, which prize centralised control, will be able to exploit the benefits of a technology that pushes intelligence and insight to the lowest tactical levels.

If, tragically, the first AI -powered war does break out, international law is likely to be pushed to the margins. All the more reason to think today about how to limit the destruction. China should heed America’s call to rule out ai control over nuclear weapons, for instance. And once a war begins, human-to-human hotlines will become more important than ever. AI systems told to maximise military advantage will need to be encoded with values and restraints that human commanders take for granted. These include placing an implicit value on human life—how many civilians is it acceptable to kill in pursuing a high-value target?—and avoiding certain destabilising strikes, such as on nuclear early-warning satellites.

The uncertainties are profound. The only sure thing is that AI -driven change is drawing near. The armies that anticipate and master technological advances earliest and most effectively will probably prevail. Everyone else is likely to be a victim. ■

For subscribers only: to see how we design each week’s cover, sign up to our weekly  Cover Story newsletter .

Explore more

This article appeared in the Leaders section of the print edition under the headline “War and AI”

Leaders June 22nd 2024

  • The exponential growth of solar power will change the world
  • Emmanuel Macron’s project of reform is at risk
  • How to tax billionaires—and how not to
  • Javier Milei’s next move could make his presidency—or break it
  • India should liberate its cities and create more states

War and AI

From the June 22nd 2024 edition

Discover stories from this section and more in the list of contents

More from Leaders

will ai help the world or hurt it essay

Joe Biden should now give way to an alternative candidate

His last and greatest political act would help rescue America from an emergency

will ai help the world or hurt it essay

What to make of Joe Biden’s plans for a second term

His domestic agenda is underwhelming, unrealistic and better than the alternative

will ai help the world or hurt it essay

A pivotal moment for China’s Communist Party

Will Xi Jinping keep ignoring good advice at the party’s third plenum?

LLMs now write lots of science. Good

Easier and more lucid writing will make science faster and better

Macron has done well by France. But he risks throwing it all away

After the election, populists of the right and left could hobble a centrist president

Keir Starmer should be Britain’s next prime minister

Why Labour must form the next government

  • Skip to main content
  • Keyboard shortcuts for audio player

Planet Money

  • Planet Money Podcast
  • The Indicator Podcast
  • Planet Money Newsletter Archive
  • Planet Money Summer School

Planet Money

  • LISTEN & FOLLOW
  • Apple Podcasts
  • Google Podcasts
  • Amazon Music

Your support helps make our show possible and unlocks access to our sponsor-free feed.

If AI is so good, why are there still so many jobs for translators?

Greg Rosalsky, photographed for NPR, 2 August 2022, in New York, NY. Photo by Mamadi Doumbouya for NPR.

Greg Rosalsky

Lost in automation?

Lost in automation? Vaselena/Getty Images hide caption

Earlier this year, a drumbeat of news headlines played into public anxieties about the safety of human jobs when Duolingo, a language learning app, became a prominent example of a company cutting workers and replacing them with artificial intelligence.

The most eye-catching job cuts were those for translators, who worked on some of the company’s less popular language education courses. Translators and interpreters are often near the top of media listicles as the jobs most likely to be killed by AI. When the stories about Duolingo’s job cuts circulated, they seemed to confirm that the inevitable AI jobs apocalypse had arrived.

In a recent conversation with Planet Money , the CEO of Duolingo, Luis von Ahn, downplayed the meaning of the cuts. It wasn't full-time employees. It was only 10% of their contractors. His company’s recent embrace of generative AI only played one part in the decision, and so on. More interesting, considering Duolingo’s official partnership with OpenAI, was von Ahn’s reaction to the company’s recent demonstration of its newest version of ChatGPT, GPT-4o.

In its live-streamed demonstration event announcing the launch of GPT-4o last month, OpenAI showcased how good its popular chatbot is at translating languages between people in real time. It showed two OpenAI employees, one speaking Italian, the other English, with a ChatGPT app on a smartphone audibly translating a conversation between them. The demo was short, with the employees asking and answering a single question: “If whales could talk, what would they tell us?” ChatGPT — not surprisingly, given this was a public marketing event — seemed to do a good job.

“It’s funny that was the demo,” von Ahn says. He says Google Translate could have done a similar demonstration like 8 years ago. The reality, he says, is that computer translation between the world’s major languages has already been “really good” for quite a while.

Indeed, AI has been supercharging the abilities of machines to translate foreign languages for close to a decade or more — which is why it’s an interesting case study of the potential effects AI will have on the job market. Contrary to some of the doomsayers, the great AI massacre of translator jobs has not arrived, including even at Duolingo. It’s proving hard to fully automate away translation jobs. But why isn’t AI killing these jobs? And, even if it isn’t, how is it reshaping them?

As far back as 2006, when Google launched Google Translate, the translation industry has been “speculating about the potential for AI to replace human translators,” says Bridget Hylak, a representative from the American Translators Association, the largest professional organization for translators and interpreters in the United States. “Since the advent of neural machine translation (NMT) around 2016, which marked a significant improvement over traditional machine translation like Google Translate, we [translators and interpreters] have been integrating AI into our workflows.”

So, yeah, translators have been grappling with AI for a while. Yet, despite the fact that anyone with a smartphone has long been able to use this machine translation technology for free or at a relatively low cost — there are still a ton of jobs for human translators and interpreters out there.

In fact, according to the US Bureau of Labor Statistics (BLS), the number of jobs for human translators and interpreters grew by 49.4% between 2008 and 2018, thanks largely to globalization. After 2018, BLS changed how they collect and measure occupational data, which makes it less reliable for measuring job growth over the last few years.

However, data from the Census Bureau, which began tracking growth in this occupation in 2020, shows the number of people employed as interpreters and translators grew 11 percent between 2020 and 2023. (Thanks to Sofia Shchukina , our new Planet Money fellow, for helping us sift through and crunch all the numbers!)

The reality is that, despite advances in AI, jobs for human interpreters and translators are not cratering. In fact, the data suggests they’re growing.

Tons of businesses and governments are currently hiring translators and interpreters. Honda, for example, is currently hiring a Japanese interpreter/translator for its factories in South Carolina. Starplus Energy, a manufacturer of batteries for electric vehicles, seeks multiple Korean interpreters/translators for their plant in Kokomo, Indiana. The City of San Francisco seeks a “Bilingual (English-Spanish) Translator/Proofreader and Phone Operator.” Languars Inc wants a “French Medical Interpreter.”

In fact, BLS projects the number of jobs for interpreters and translators will grow by about 4% over the next decade. While that would represent a slowdown from the tremendous job growth the industry saw over the last couple decades, it is still actually slightly faster than the average growth BLS projects for all existing occupations in the US economy.

So, if AI has gotten so good, and so good at translation in particular, why are there still so many jobs for translators and interpreters?

“Well, I don’t think it’s that good,” says Daron Acemoglu, a superstar economist at MIT who studies AI. “I think how good AI has become is often exaggerated.”

Acemoglu has a new academic paper out that sort of throws a wet blanket over the fiery excitement for AI. Sure, he says, AI can do some amazing things. “But there is pretty much nothing that humans do as meaningful occupation that generative AI can now do. So in almost everything it can at best help humans, and at worst, not even do that.”

Acemoglu says he believes translation is “one of the best test cases” for AI’s capability to take over human jobs “because, I think if it can do anything, it’s translation.” But, he says, even in this realm, the technology is just “not that reliable.”

Why AI Didn’t Kill The Translation Star (At Least Not Yet)

For a more bullish take on AI, back to Duolingo CEO Luis von Ahn. Von Ahn, like many other technologists, sees AI ushering in a dramatically different world. It, for example, is making his company’s mission of teaching people foreign languages with an app more effective by enabling users to have rich, improvised conversations with an interactive chatbot.

However, even von Ahn acknowledges that the technology is still limited. That’s why, despite recent headlines suggesting otherwise, his company still employs translators. “It’s still the case that computers make mistakes,” von Ahn says. “I don’t think you wanna fully rely on a computer if you're a translator for the army and you're talking to an enemy combatant or something like that.”

Duolingo, von Ahn says, still uses human translators to double-check that machine-generated translations don’t make mistakes in the company’s learning content. But, he says, translators at his company mostly work on more high-value aspects of the business, where the extra cost of employing a human is really worth it. “If it’s things like the user interface of Duolingo, where a button on the app says ‘quit’ or ‘purchase now’ or whatever, that translation is all done with humans. We spend a lot of effort on that because each one of those features is highly valuable. We just cannot have a mistake.”

And it’s not just about mistakes, von Ahn says. The company also uses human translators to ensure consistency in the company’s style and tone throughout their app. Turns out, AI can’t consistently master “the same playful voice” Duolingo wants to communicate to users. So, for that, von Ahn says, “we still employ humans.”

Daniel Sebesta, another representative from the American Translators Association, suggests this is a common reason why companies and governments still employ human translators. “AI still struggles with complex linguistic tasks that require creativity, cultural sensitivity, and the ability to understand subtle nuances in meaning, especially in low-resource languages (i.e., languages that don’t have millions upon millions of high-quality translated words that can be used to train AI),” Sebesta says. “Companies continue to hire human translators and interpreters because they understand that AI cannot replace the expertise and judgment that these professionals bring to the table. This is particularly true for high-stakes projects in fields like legal, medical, but also literary translation, where accuracy and cultural appropriateness are paramount.”

In realms where mistakes could mean lawsuits, embarrassment, injuries, or even deaths, it makes a lot of sense why so many companies, non-profits, and government agencies still want humans overseeing and editing AI-generated translation and interpretation. There’s also considerable demand for human translators and interpreters due to regulations. “In the United States, the Title VI of the Civil Rights Act of 1964 bars discrimination based on language, so some entities — like courts and schools — are simply mandated to provide language services,” Hylak says.

"Despite the widespread use of translation software, having a human expert in the loop is still necessary to ensure reliable and accurate translations,” says Javier Colato, an economist at BLS. “Human translators will also be needed to handle more complex translations, such as technical documents and literary works. Therefore, considering the strong underlying demand for translations and continued need for human translators, some employment growth is still likely for the occupation."

The Wages Of Cyborg Translators

Everyone we spoke to stressed that these days, human translators and interpreters use AI as a tool to become much more productive. “We see a future — for many, in fact, a present — where AI-powered tools and human translators/interpreters collaborate, with AI handling more routine tasks and humans focusing their cognitive effort on the more creative and nuanced aspects of conveying meaning,” says Sebesta.

Von Ahn says he believes this human-machine collaboration in translation is one reason why demand for translation services is so strong. “What you’re seeing today, for translation in particular, is this combo, this hybrid between humans and computers,” von Ahn says. That has made translation a lot faster and cheaper, and, as a result, he says, “there’s a lot more demand.”

So, great, more demand for translation services as they get cheaper. And AI is proving, at least so far, incapable of doing much of this work without an important role for humans. But that doesn’t necessarily mean the humans doing these jobs are thriving in this changing translation economy. AI automation of much of their work may, in fact, be devaluing their skills, since, thanks to the assistance of machines, more people can do more translation better and faster.

Acemoglu’s research suggests that the effect of automation on wages is, well, complicated and not universal. Sometimes automation can enrich workers. Think doctors who no longer have to spend as much time on paperwork thanks to computers. Instead, they can focus more on their bread and butter skills of treating patients. These skills are scarce, in demand, and therefore very valuable, and focusing more on them, doctors can be more productive and get even richer.

But other times automation can hurt an occupation’s wages by devaluing its core skills. Even if automation doesn’t kill the job, maybe what was a high-skilled job in the marketplace becomes a more lower-skill job as machines enable a whole lot more people to do it.

And, sure, these now-low-skill workers may be way more productive than they were before technological advances. But, Acemoglu stresses, this doesn’t mean they’ll necessarily share in the fruits of that productivity. Factory owners — or the owners of AI algorithms — may get all the money. Historically, Acemoglu’s research suggests, workers have had to turn to strikes, unionization efforts, or elections of pro-labor politicians to pass policies like minimum wage laws to share in the new riches created by machines and increase their standard of living.

Data from BLS — which is usually the best data source for stuff like this, but again, might not be well-suited to track changes over the last few years — suggest that, if anything, the wages of the typical translator and interpreter are actually growing. As of 2023, the typical interpreter and translator made $27.45 per hour , or about $57,090 per year, which is slightly higher than the typical pay for all American workers (about $48,000 per year).

When it comes to incomes, Sebesta foresees a growing disparity between translators who master AI and those who don’t. “The incomes of the former group will rise and the practitioners will feel empowered,” Sebesta says. “The other group will likely feel left behind and exploited and will miss out on the opportunities.” It’s why, he says, he sees his organization, the American Translators Association, as having an important mission in helping translators adapt to technological change and thrive in the age of AI.

Acemoglu, the MIT economist, glancing at the economics of the translation business, believes that the incomes of most translators and interpreters will likely take a hit as technological change sweeps the industry. For him, it boils down to the laws of supply and demand. If AI enables a flood of translation supply, that likely means the price of translation goes down. Translation services get cheaper. Good for consumers. Probably bad for the incomes of many translators. Although, he says, maybe more elite workers in the profession — like translators of books or high-level interpreters working in diplomacy — will be exempt from this downward pressure on their wages.

But, even if this scenario manifests itself, it would not mean an existential threat for the jobs of most human translators and interpreters anytime soon.

  • artificial intelligence

'Babbling' and 'hoarse': Biden's debate performance sends Democrats into a panic

ATLANTA — President Joe Biden was supposed to put the nation’s mind at ease over his physical and mental capacity with his debate showing Thursday night. 

But from the onset of the debate, Biden, 81, seemingly struggled even to talk, mostly summoning a weak, raspy voice. In the opening minutes, he repeatedly tripped over his words, misspoke and lost his train of thought.  

In one of the most notable moments, Biden ended a rambling statement that lacked focus by saying, “We finally beat Medicare,” before moderators cut him off and transitioned back to former President Donald Trump. 

While Biden warmed up and gained more of a rhythm as the debate progressed, he struggled to land a punch against Trump, much less fact-check everything Trump said as he unleashed a torrent of bad information.

Trump also pounced on Biden, saying at one point that he didn’t understand what Biden had just said with regard to the border. 

“I don’t know if he knows what he said, either,” Trump said.   

Nearly an hour into the debate, a Biden aide and others familiar with his situation offered up an explanation for his hoarseness: He has a cold.

But there were problems aside from the shakiness of Biden's voice. When he wasn't talking, he often stared off into the distance. Trump frequently steamrolled over Biden, accusing him of being a criminal and of peddling misinformation — many times without a response from Biden, though he did fire back with a handful of one-liners throughout.

The Biden campaign acknowledged that the debate would be a critical moment in the election, with officials hoping it could shake up the race to his benefit. Most polls have found the race to be neck and neck, with razor-thin margins that have moved negligibly for months, even after a New York jury found Trump guilty on 34 felony counts . 

Questions about Biden’s age and frailty have dragged down his polling numbers for months. The public concerns are exacerbated by deceptively edited videos , some of which have gone viral, that cut off relevant parts of an event, making it appear as if Biden is wandering or confused. This was Biden’s first opportunity since the State of the Union speech to dispel that narrative. 

Instead of a new beginning, many Democrats saw it as a moment for panic. 

“Democrats just committed collective suicide,” said a party strategist who has worked on presidential campaigns. “Biden sounds hoarse, looks tired and is babbling. He is reaffirming everything voters already perceived. President Biden can’t win. This debate is a nail in the political coffin.” 

“It’s hard to argue that we shouldn’t nominate someone else,” a Democratic consultant who works on down-ballot races said. 

Biden did ramp up as the debate progressed. 

“Only one of us is a convicted felon, and I’m looking at him,” Biden said to Trump. That was one moment that tested particularly well in the Biden campaign's internal real-time polling at the time of the debate, according to a person familiar with the polling. 

An aide said that it was “not an ideal start” for Biden at the beginning of the debate but that there was “no mass panic” at the campaign headquarters in Delaware.

The muting of the candidates' microphones at the debate, a stipulation both campaigns agreed to before the debate, added a new dimension to the faceoff. The first Biden-Trump match-up in 2020 was marked by repeated interruptions by Trump, leading to moments of frustration for Biden.

“Will you shut up, man?” Biden complained at that first Cleveland debate. 

Reaction pours in

“I’m thinking the Democrats are thinking about who the Barry Goldwater is who can walk in tomorrow and tell the president he needs to step aside,” said Ben Proto, chairman of the Connecticut Republican Party.

In 1974, after key Watergate tapes were made public, Sen. Barry Goldwater, R-Ariz., went to see President Richard Nixon alongside other prominent lawmakers, telling Nixon that he would be convicted by the Senate and that he should step aside — which he did.

Biden’s campaign defended his performance, saying he offered a “positive and winning vision” for America.

“On the other side of the stage was Donald Trump, who offered a dark and backwards window into what America will look like if he steps foot back in the White House: a country where women are forced to beg for the health care they need to stay alive. A country that puts the interests of billionaires over working people,” Biden campaign chair Jen O’Malley Dillon said in a statement. “And a former president who not once, not twice, but three times, failed to promise he would accept the results of a free and fair election this November.”

Some Democrats also defended Biden presidency more broadly after the debate, pointing to his policies over Trump's.

"One thing this debate won’t change is Trump’s base instinct to sell out anyone to make a quick buck or put his own image on a steak, golf course or even the Holy Bible," said Brandon Weathersby, a spokesman with the pro-Biden American Bridge 21st Century super PAC. "Trump puts himself first every time, and that won’t change if he becomes president again."

Trump, meanwhile, has fended off his own questions over whether he’s diminished by age, including his struggles to stay on topic and his meandering when he’s speaking . Biden has posited that Trump “snapped” after his 2020 election loss and is unstable, which he aired again Thursday night.

Trump often gave his typical rambling responses and seemed at times to make up factoids and figures.

“During my four years, I had the best environmental numbers ever, and my top environmental people gave me that statistic just before I walked on the stage, actually,” Trump said.

Trump also said he would lower insulin prices for seniors, but it was Biden who signed legislation in 2022 that lowered out-of-pocket costs for people on Medicare to $35 a month and covered all insulin products. 

Setting the stage for the fall

The first debate during the 2020 election cycle was in early September, meaning the first 2024 general election debate was significantly earlier than usual — more than two months ahead of Labor Day, which is often seen as the point when most voters start to pay attention to presidential contests.

“Debates move numbers,” said Matt Gorman, a longtime Republican strategist who worked for presidential campaign of Sen. Tim Scott of South Carolina. “And with this so early — and the next one not until September — you’re stuck with the narrative for four long months.

“And one and the other’s performance will set the tone for the next one,” he added.

For months, Trump’s team has been hammering Biden’s mental acuity, a strategy that is at odds with how campaigns generally handle the lead-up to debates, when they try to build up opponents as deft debaters to set expectations.

The expectations for Biden were low, and by almost all estimates he was unable to clear them.

“Biden just had to beat himself; unfortunately the stumbling and diminished Joe Biden the world has come to know made Trump look competent and energetic,” said a former Trump campaign official who isn’t working for his campaign this year. “I expect there will be some loud calls from Democrats for a change on the top of the ticket.”

“The floor for Biden was so low,” the person added. “After Biden’s debate performance, it seems the floor is 6 feet under.”

The 90-minute debate hit on a wide variety of topics, but many of the most dominant themes were centered on those that have been most prominent on the campaign trail over the past few months.

Trump hit Biden on two big policy fights that have stubbornly dogged his campaign: immigration and inflation. 

Since Biden took office, 15 million jobs have been created and the unemployment rate sits at a relatively low 4%, but prices for consumer goods have remained high, and they provided a consistent line of attack from the Trump campaign and Republicans more broadly.

In one heated exchange, Trump point-blank said “he caused the inflation.” Biden said in response there was less inflation under Trump because he tanked the economy. 

“There was no inflation when I came into office,” Biden said before that rejoinder — a quote Republicans quickly used as evidence that all of the current price hikes happened on Biden’s watch.

Trump continued to attack Biden over his border policies, which his campaign has used as one of its biggest lines of attack throughout the campaign. That often including amplifying each time an undocumented migrant commits a crime even though the data doesn’t support the idea of a migrant crime wave .

“ We have a border that is the most dangerous place anywhere in the world,” Trump said.

Earlier this year, Trump used his influence over congressional Republicans to block a bipartisan border deal that Biden supported.

Biden also tried to land a punch about Jan. 6, trying to build on the oft-discussed idea that Trump’s returning to the White House would be a threat to democracy.

“He encouraged those folks to go up to Capitol Hill,” Biden said. “He sat there for three hours being begged by his vice president and many colleagues on the Republican side to do something.”

Trump deflected, arguing the Biden should be “ashamed” for arresting those who participated in the attempted insurrection. 

will ai help the world or hurt it essay

Natasha Korecki is a senior national political reporter for NBC News.

will ai help the world or hurt it essay

Matt Dixon is a senior national politics reporter for NBC News, based in Florida.

will ai help the world or hurt it essay

Jonathan Allen is a senior national politics reporter for NBC News, based in Washington.

ChatGPT: Everything you need to know about the AI-powered chatbot

ChatGPT welcome screen

ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code with short text prompts has evolved into a behemoth used by more than 92% of Fortune 500 companies .

That growth has propelled OpenAI itself into becoming one of the most-hyped companies in recent memory. And its latest partnership with Apple for its upcoming generative AI offering, Apple Intelligence, has given the company another significant bump in the AI race.

2024 also saw the release of GPT-4o, OpenAI’s new flagship omni model for ChatGPT. GPT-4o is now the default free model, complete with voice and vision capabilities. But after demoing GPT-4o, OpenAI paused one of its voices , Sky, after allegations that it was mimicking Scarlett Johansson’s voice in “Her.”

OpenAI is facing internal drama, including the sizable exit of co-founder and longtime chief scientist Ilya Sutskever as the company dissolved its Superalignment team. OpenAI is also facing a lawsuit from Alden Global Capital-owned newspapers , including the New York Daily News and the Chicago Tribune, for alleged copyright infringement, following a similar suit filed by The New York Times last year.

Here’s a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. And if you have any other questions, check out our ChatGPT FAQ here.

Timeline of the most recent ChatGPT updates

February 2024, january 2024.

  • ChatGPT FAQs

OpenAI delays ChatGPT’s new Voice Mode

OpenAI planned to start rolling out its advanced Voice Mode feature to a small group of ChatGPT Plus users in late June, but it says lingering issues forced it to postpone the launch to July. OpenAI says Advanced Voice Mode might not launch for all ChatGPT Plus customers until the fall, depending on whether it meets certain internal safety and reliability checks.

ChatGPT releases app for Mac

ChatGPT for macOS is now available for all users . With the app, users can quickly call up ChatGPT by using the keyboard combination of Option + Space. The app allows users to upload files and other photos, as well as speak to ChatGPT from their desktop and search through their past conversations.

The ChatGPT desktop app for macOS is now available for all users. Get faster access to ChatGPT to chat about email, screenshots, and anything on your screen with the Option + Space shortcut: https://t.co/2rEx3PmMqg pic.twitter.com/x9sT8AnjDm — OpenAI (@OpenAI) June 25, 2024

Apple brings ChatGPT to its apps, including Siri

Apple announced at WWDC 2024 that it is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems. The ChatGPT integrations, powered by GPT-4o, will arrive on iOS 18, iPadOS 18 and macOS Sequoia later this year, and will be free without the need to create a ChatGPT or OpenAI account. Features exclusive to paying ChatGPT users will also be available through Apple devices .

Apple is bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems #WWDC24 Read more: https://t.co/0NJipSNJoS pic.twitter.com/EjQdPBuyy4 — TechCrunch (@TechCrunch) June 10, 2024

House Oversight subcommittee invites Scarlett Johansson to testify about ‘Sky’ controversy

Scarlett Johansson has been invited to testify about the controversy surrounding OpenAI’s Sky voice at a hearing for the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation. In a letter, Rep. Nancy Mace said Johansson’s testimony could “provide a platform” for concerns around deepfakes.

ChatGPT experiences two outages in a single day

ChatGPT was down twice in one day: one multi-hour outage in the early hours of the morning Tuesday and another outage later in the day that is still ongoing. Anthropic’s Claude and Perplexity also experienced some issues.

You're not alone, ChatGPT is down once again. pic.twitter.com/Ydk2vNOOK6 — TechCrunch (@TechCrunch) June 4, 2024

The Atlantic and Vox Media ink content deals with OpenAI

The Atlantic and Vox Media have announced licensing and product partnerships with OpenAI . Both agreements allow OpenAI to use the publishers’ current content to generate responses in ChatGPT, which will feature citations to relevant articles. Vox Media says it will use OpenAI’s technology to build “audience-facing and internal applications,” while The Atlantic will build a new experimental product called Atlantic Labs .

I am delighted that @theatlantic now has a strategic content & product partnership with @openai . Our stories will be discoverable in their new products and we'll be working with them to figure out new ways that AI can help serious, independent media : https://t.co/nfSVXW9KpB — nxthompson (@nxthompson) May 29, 2024

OpenAI signs 100K PwC workers to ChatGPT’s enterprise tier

OpenAI announced a new deal with management consulting giant PwC . The company will become OpenAI’s biggest customer to date, covering 100,000 users, and will become OpenAI’s first partner for selling its enterprise offerings to other businesses.

OpenAI says it is training its GPT-4 successor

OpenAI announced in a blog post that it has recently begun training its next flagship model to succeed GPT-4. The news came in an announcement of its new safety and security committee, which is responsible for informing safety and security decisions across OpenAI’s products.

Former OpenAI director claims the board found out about ChatGPT on Twitter

On the The TED AI Show podcast, former OpenAI board member Helen Toner revealed that the board did not know about ChatGPT until its launch in November 2022. Toner also said that Sam Altman gave the board inaccurate information about the safety processes the company had in place and that he didn’t disclose his involvement in the OpenAI Startup Fund.

Sharing this, recorded a few weeks ago. Most of the episode is about AI policy more broadly, but this was my first longform interview since the OpenAI investigation closed, so we also talked a bit about November. Thanks to @bilawalsidhu for a fun conversation! https://t.co/h0PtK06T0K — Helen Toner (@hlntnr) May 28, 2024

ChatGPT’s mobile app revenue saw biggest spike yet following GPT-4o launch

The launch of GPT-4o has driven the company’s biggest-ever spike in revenue on mobile , despite the model being freely available on the web. Mobile users are being pushed to upgrade to its $19.99 monthly subscription, ChatGPT Plus, if they want to experiment with OpenAI’s most recent launch.

OpenAI to remove ChatGPT’s Scarlett Johansson-like voice

After demoing its new GPT-4o model last week, OpenAI announced it is pausing one of its voices , Sky, after users found that it sounded similar to Scarlett Johansson in “Her.”

OpenAI explained in a blog post that Sky’s voice is “not an imitation” of the actress and that AI voices should not intentionally mimic the voice of a celebrity. The blog post went on to explain how the company chose its voices: Breeze, Cove, Ember, Juniper and Sky.

We’ve heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them. Read more about how we chose these voices: https://t.co/R8wwZjU36L — OpenAI (@OpenAI) May 20, 2024

ChatGPT lets you add files from Google Drive and Microsoft OneDrive

OpenAI announced new updates for easier data analysis within ChatGPT . Users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts, and export customized charts for presentations. The company says these improvements will be added to GPT-4o in the coming weeks.

We're rolling out interactive tables and charts along with the ability to add files directly from Google Drive and Microsoft OneDrive into ChatGPT. Available to ChatGPT Plus, Team, and Enterprise users over the coming weeks. https://t.co/Fu2bgMChXt pic.twitter.com/M9AHLx5BKr — OpenAI (@OpenAI) May 16, 2024

OpenAI inks deal to train AI on Reddit data

OpenAI announced a partnership with Reddit that will give the company access to “real-time, structured and unique content” from the social network. Content from Reddit will be incorporated into ChatGPT, and the companies will work together to bring new AI-powered features to Reddit users and moderators.

We’re partnering with Reddit to bring its content to ChatGPT and new products: https://t.co/xHgBZ8ptOE — OpenAI (@OpenAI) May 16, 2024

OpenAI debuts GPT-4o “omni” model now powering ChatGPT

OpenAI’s spring update event saw the reveal of its new omni model, GPT-4o, which has a black hole-like interface , as well as voice and vision capabilities that feel eerily like something out of “Her.” GPT-4o is set to roll out “iteratively” across its developer and consumer-facing products over the next few weeks.

OpenAI demos real-time language translation with its latest GPT-4o model. pic.twitter.com/pXtHQ9mKGc — TechCrunch (@TechCrunch) May 13, 2024

OpenAI to build a tool that lets content creators opt out of AI training

The company announced it’s building a tool, Media Manager, that will allow creators to better control how their content is being used to train generative AI models — and give them an option to opt out. The goal is to have the new tool in place and ready to use by 2025.

OpenAI explores allowing AI porn

In a new peek behind the curtain of its AI’s secret instructions , OpenAI also released a new NSFW policy . Though it’s intended to start a conversation about how it might allow explicit images and text in its AI products, it raises questions about whether OpenAI — or any generative AI vendor — can be trusted to handle sensitive content ethically.

OpenAI and Stack Overflow announce partnership

In a new partnership, OpenAI will get access to developer platform Stack Overflow’s API and will get feedback from developers to improve the performance of their AI models. In return, OpenAI will include attributions to Stack Overflow in ChatGPT. However, the deal was not favorable to some Stack Overflow users — leading to some sabotaging their answer in protest .

U.S. newspapers file copyright lawsuit against OpenAI and Microsoft

Alden Global Capital-owned newspapers, including the New York Daily News, the Chicago Tribune, and the Denver Post, are suing OpenAI and Microsoft for copyright infringement. The lawsuit alleges that the companies stole millions of copyrighted articles “without permission and without payment” to bolster ChatGPT and Copilot.

OpenAI inks content licensing deal with Financial Times

OpenAI has partnered with another news publisher in Europe, London’s Financial Times , that the company will be paying for content access. “Through the partnership, ChatGPT users will be able to see select attributed summaries, quotes and rich links to FT journalism in response to relevant queries,” the FT wrote in a press release.

OpenAI opens Tokyo hub, adds GPT-4 model optimized for Japanese

OpenAI is opening a new office in Tokyo and has plans for a GPT-4 model optimized specifically for the Japanese language. The move underscores how OpenAI will likely need to localize its technology to different languages as it expands.

Sam Altman pitches ChatGPT Enterprise to Fortune 500 companies

According to Reuters, OpenAI’s Sam Altman hosted hundreds of executives from Fortune 500 companies across several cities in April, pitching versions of its AI services intended for corporate use.

OpenAI releases “more direct, less verbose” version of GPT-4 Turbo

Premium ChatGPT users — customers paying for ChatGPT Plus, Team or Enterprise — can now use an updated and enhanced version of GPT-4 Turbo . The new model brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base.

Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding. Source: https://t.co/fjoXDCOnPr pic.twitter.com/I4fg4aDq1T — OpenAI (@OpenAI) April 12, 2024

ChatGPT no longer requires an account — but there’s a catch

You can now use ChatGPT without signing up for an account , but it won’t be quite the same experience. You won’t be able to save or share chats, use custom instructions, or other features associated with a persistent account. This version of ChatGPT will have “slightly more restrictive content policies,” according to OpenAI. When TechCrunch asked for more details, however, the response was unclear:

“The signed out experience will benefit from the existing safety mitigations that are already built into the model, such as refusing to generate harmful content. In addition to these existing mitigations, we are also implementing additional safeguards specifically designed to address other forms of content that may be inappropriate for a signed out experience,” a spokesperson said.

OpenAI’s chatbot store is filling up with spam

TechCrunch found that the OpenAI’s GPT Store is flooded with bizarre, potentially copyright-infringing GPTs . A cursory search pulls up GPTs that claim to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services and advertise themselves as being able to bypass AI content detection tools.

The New York Times responds to OpenAI’s claims that it “hacked” ChatGPT for its copyright lawsuit

In a court filing opposing OpenAI’s motion to dismiss The New York Times’ lawsuit alleging copyright infringement, the newspaper asserted that “OpenAI’s attention-grabbing claim that The Times ‘hacked’ its products is as irrelevant as it is false.” The New York Times also claimed that some users of ChatGPT used the tool to bypass its paywalls.

OpenAI VP doesn’t say whether artists should be paid for training data

At a SXSW 2024 panel, Peter Deng, OpenAI’s VP of consumer product dodged a question on whether artists whose work was used to train generative AI models should be compensated . While OpenAI lets artists “opt out” of and remove their work from the datasets that the company uses to train its image-generating models, some artists have described the tool as onerous.

A new report estimates that ChatGPT uses more than half a million kilowatt-hours of electricity per day

ChatGPT’s environmental impact appears to be massive. According to a report from The New Yorker , ChatGPT uses an estimated 17,000 times the amount of electricity than the average U.S. household to respond to roughly 200 million requests each day.

ChatGPT can now read its answers aloud

OpenAI released a new Read Aloud feature for the web version of ChatGPT as well as the iOS and Android apps. The feature allows ChatGPT to read its responses to queries in one of five voice options and can speak 37 languages, according to the company. Read aloud is available on both GPT-4 and GPT-3.5 models.

ChatGPT can now read responses to you. On iOS or Android, tap and hold the message and then tap “Read Aloud”. We’ve also started rolling on web – click the "Read Aloud" button below the message. pic.twitter.com/KevIkgAFbG — OpenAI (@OpenAI) March 4, 2024

OpenAI partners with Dublin City Council to use GPT-4 for tourism

As part of a new partnership with OpenAI, the Dublin City Council will use GPT-4 to craft personalized itineraries for travelers, including recommendations of unique and cultural destinations, in an effort to support tourism across Europe.

A law firm used ChatGPT to justify a six-figure bill for legal services

New York-based law firm Cuddy Law was criticized by a judge for using ChatGPT to calculate their hourly billing rate . The firm submitted a $113,500 bill to the court, which was then halved by District Judge Paul Engelmayer, who called the figure “well above” reasonable demands.

ChatGPT experienced a bizarre bug for several hours

ChatGPT users found that ChatGPT was giving nonsensical answers for several hours , prompting OpenAI to investigate the issue. Incidents varied from repetitive phrases to confusing and incorrect answers to queries. The issue was resolved by OpenAI the following morning.

Match Group announced deal with OpenAI with a press release co-written by ChatGPT

The dating app giant home to Tinder, Match and OkCupid announced an enterprise agreement with OpenAI in an enthusiastic press release written with the help of ChatGPT . The AI tech will be used to help employees with work-related tasks and come as part of Match’s $20 million-plus bet on AI in 2024.

ChatGPT will now remember — and forget — things you tell it to

As part of a test, OpenAI began rolling out new “memory” controls for a small portion of ChatGPT free and paid users, with a broader rollout to follow. The controls let you tell ChatGPT explicitly to remember something, see what it remembers or turn off its memory altogether. Note that deleting a chat from chat history won’t erase ChatGPT’s or a custom GPT’s memories — you must delete the memory itself.

We’re testing ChatGPT's ability to remember things you discuss to make future chats more helpful. This feature is being rolled out to a small portion of Free and Plus users, and it's easy to turn on or off. https://t.co/1Tv355oa7V pic.twitter.com/BsFinBSTbs — OpenAI (@OpenAI) February 13, 2024

OpenAI begins rolling out “Temporary Chat” feature

Initially limited to a small subset of free and subscription users, Temporary Chat lets you have a dialogue with a blank slate. With Temporary Chat, ChatGPT won’t be aware of previous conversations or access memories but will follow custom instructions if they’re enabled.

But, OpenAI says it may keep a copy of Temporary Chat conversations for up to 30 days for “safety reasons.”

Use temporary chat for conversations in which you don’t want to use memory or appear in history. pic.twitter.com/H1U82zoXyC — OpenAI (@OpenAI) February 13, 2024

ChatGPT users can now invoke GPTs directly in chats

Paid users of ChatGPT can now bring GPTs into a conversation by typing “@” and selecting a GPT from the list. The chosen GPT will have an understanding of the full conversation, and different GPTs can be “tagged in” for different use cases and needs.

You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT. This allows you to add relevant GPTs with the full context of the conversation. pic.twitter.com/Pjn5uIy9NF — OpenAI (@OpenAI) January 30, 2024

ChatGPT is reportedly leaking usernames and passwords from users’ private conversations

Screenshots provided to Ars Technica found that ChatGPT is potentially leaking unpublished research papers, login credentials and private information from its users. An OpenAI representative told Ars Technica that the company was investigating the report.

ChatGPT is violating Europe’s privacy laws, Italian DPA tells OpenAI

OpenAI has been told it’s suspected of violating European Union privacy , following a multi-month investigation of ChatGPT by Italy’s data protection authority. Details of the draft findings haven’t been disclosed, but in a response, OpenAI said: “We want our AI to learn about the world, not about private individuals.”

OpenAI partners with Common Sense Media to collaborate on AI guidelines

In an effort to win the trust of parents and policymakers, OpenAI announced it’s partnering with Common Sense Media to collaborate on AI guidelines and education materials for parents, educators and young adults. The organization works to identify and minimize tech harms to young people and previously flagged ChatGPT as lacking in transparency and privacy .

OpenAI responds to Congressional Black Caucus about lack of diversity on its board

After a letter from the Congressional Black Caucus questioned the lack of diversity in OpenAI’s board, the company responded . The response, signed by CEO Sam Altman and Chairman of the Board Bret Taylor, said building a complete and diverse board was one of the company’s top priorities and that it was working with an executive search firm to assist it in finding talent. 

OpenAI drops prices and fixes ‘lazy’ GPT-4 that refused to work

In a blog post , OpenAI announced price drops for GPT-3.5’s API, with input prices dropping to 50% and output by 25%, to $0.0005 per thousand tokens in, and $0.0015 per thousand tokens out. GPT-4 Turbo also got a new preview model for API use, which includes an interesting fix that aims to reduce “laziness” that users have experienced.

Expanding the platform for @OpenAIDevs : new generation of embedding models, updated GPT-4 Turbo, and lower pricing on GPT-3.5 Turbo. https://t.co/7wzCLwB1ax — OpenAI (@OpenAI) January 25, 2024

OpenAI bans developer of a bot impersonating a presidential candidate

OpenAI has suspended AI startup Delphi, which developed a bot impersonating Rep. Dean Phillips (D-Minn.) to help bolster his presidential campaign. The ban comes just weeks after OpenAI published a plan to combat election misinformation, which listed “chatbots impersonating candidates” as against its policy.

OpenAI announces partnership with Arizona State University

Beginning in February, Arizona State University will have full access to ChatGPT’s Enterprise tier , which the university plans to use to build a personalized AI tutor, develop AI avatars, bolster their prompt engineering course and more. It marks OpenAI’s first partnership with a higher education institution.

Winner of a literary prize reveals around 5% her novel was written by ChatGPT

After receiving the prestigious Akutagawa Prize for her novel The Tokyo Tower of Sympathy, author Rie Kudan admitted that around 5% of the book quoted ChatGPT-generated sentences “verbatim.” Interestingly enough, the novel revolves around a futuristic world with a pervasive presence of AI.

Sam Altman teases video capabilities for ChatGPT and the release of GPT-5

In a conversation with Bill Gates on the Unconfuse Me podcast, Sam Altman confirmed an upcoming release of GPT-5 that will be “fully multimodal with speech, image, code, and video support.” Altman said users can expect to see GPT-5 drop sometime in 2024.

OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

OpenAI is forming a Collective Alignment team of researchers and engineers to create a system for collecting and “encoding” public input on its models’ behaviors into OpenAI products and services. This comes as a part of OpenAI’s public program to award grants to fund experiments in setting up a “democratic process” for determining the rules AI systems follow.

OpenAI unveils plan to combat election misinformation

In a blog post, OpenAI announced users will not be allowed to build applications for political campaigning and lobbying until the company works out how effective their tools are for “personalized persuasion.”

Users will also be banned from creating chatbots that impersonate candidates or government institutions, and from using OpenAI tools to misrepresent the voting process or otherwise discourage voting.

The company is also testing out a tool that detects DALL-E generated images and will incorporate access to real-time news, with attribution, in ChatGPT.

Snapshot of how we’re preparing for 2024’s worldwide elections: • Working to prevent abuse, including misleading deepfakes • Providing transparency on AI-generated content • Improving access to authoritative voting information https://t.co/qsysYy5l0L — OpenAI (@OpenAI) January 15, 2024

OpenAI changes policy to allow military applications

In an unannounced update to its usage policy , OpenAI removed language previously prohibiting the use of its products for the purposes of “military and warfare.” In an additional statement, OpenAI confirmed that the language was changed in order to accommodate military customers and projects that do not violate their ban on efforts to use their tools to “harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

ChatGPT subscription aimed at small teams debuts

Aptly called ChatGPT Team , the new plan provides a dedicated workspace for teams of up to 149 people using ChatGPT as well as admin tools for team management. In addition to gaining access to GPT-4, GPT-4 with Vision and DALL-E3, ChatGPT Team lets teams build and share GPTs for their business needs.

OpenAI’s GPT store officially launches

After some back and forth over the last few months, OpenAI’s GPT Store is finally here . The feature lives in a new tab in the ChatGPT web client, and includes a range of GPTs developed both by OpenAI’s partners and the wider dev community.

To access the GPT Store, users must be subscribed to one of OpenAI’s premium ChatGPT plans — ChatGPT Plus, ChatGPT Enterprise or the newly launched ChatGPT Team.

the GPT store is live! https://t.co/AKg1mjlvo2 fun speculation last night about which GPTs will be doing the best by the end of today. — Sam Altman (@sama) January 10, 2024

Developing AI models would be “impossible” without copyrighted materials, OpenAI claims

Following a proposed ban on using news publications and books to train AI chatbots in the U.K., OpenAI submitted a plea to the House of Lords communications and digital committee. OpenAI argued that it would be “impossible” to train AI models without using copyrighted materials, and that they believe copyright law “does not forbid training.”

OpenAI claims The New York Times’ copyright lawsuit is without merit

OpenAI published a public response to The New York Times’s lawsuit against them and Microsoft for allegedly violating copyright law, claiming that the case is without merit.

In the response , OpenAI reiterates its view that training AI models using publicly available data from the web is fair use. It also makes the case that regurgitation is less likely to occur with training data from a single source and places the onus on users to “act responsibly.”

We build AI to empower people, including journalists. Our position on the @nytimes lawsuit: • Training is fair use, but we provide an opt-out • "Regurgitation" is a rare bug we're driving to zero • The New York Times is not telling the full story https://t.co/S6fSaDsfKb — OpenAI (@OpenAI) January 8, 2024

OpenAI’s app store for GPTs planned to launch next week

After being delayed in December , OpenAI plans to launch its GPT Store sometime in the coming week, according to an email viewed by TechCrunch. OpenAI says developers building GPTs will have to review the company’s updated usage policies and GPT brand guidelines to ensure their GPTs are compliant before they’re eligible for listing in the GPT Store. OpenAI’s update notably didn’t include any information on the expected monetization opportunities for developers listing their apps on the storefront.

GPT Store launching next week – OpenAI pic.twitter.com/I6mkZKtgZG — Manish Singh (@refsrc) January 4, 2024

OpenAI moves to shrink regulatory risk in EU around data privacy

In an email, OpenAI detailed an incoming update to its terms, including changing the OpenAI entity providing services to EEA and Swiss residents to OpenAI Ireland Limited. The move appears to be intended to shrink its regulatory risk in the European Union, where the company has been under scrutiny over ChatGPT’s impact on people’s privacy.

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI . The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?

November 30, 2022 is when ChatGPT was released for public use.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o .

Can I use ChatGPT for free?

There is a free version of ChatGPT that only requires a sign-in in addition to the paid version, ChatGPT Plus .

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.

What companies use ChatGPT?

Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool .

Most recently, Microsoft announced at it’s 2023 Build conference that it is integrating it ChatGPT-based Bing experience into Windows 11. A Brooklyn-based 3D display startup Looking Glass utilizes ChatGPT to produce holograms you can communicate with by using ChatGPT.  And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.

ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?

Can chatgpt commit libel.

Due to the nature of how these models work , they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.

We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?

Yes, it was released March 1, 2023.

What are some sample everyday uses for ChatGPT?

Everyday examples include programing, scripts, email replies, listicles, blog ideas, summarization, etc.

What are some advanced uses for ChatGPT?

Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors such as Together , Google’s Gemini and Anthropic’s Claude , and developers are creating open source alternatives .

How does ChatGPT handle data privacy?

OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out  this form . This includes the ability to make requests for deletion of AI-generated references about you. Although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression “in accordance with applicable laws”.

The web form for making a deletion of data about you request is entitled “ OpenAI Personal Data Removal Request ”.

In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users towards more information about requesting an opt out — when it writes: “See here  for instructions on how you can opt out of our use of your information to train our models.”

What controversies have surrounded ChatGPT?

Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.

Several major school systems and colleges, including New York City Public Schools , have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with .

There have also been cases of ChatGPT accusing individuals of false crimes .

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase . Another is ChatX . More launch every day.

Can ChatGPT be detected?

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests , they’re inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.

More TechCrunch

Get the industry’s biggest tech news, techcrunch daily news.

Every weekday and Sunday, you can get the best of TechCrunch’s coverage.

Startups Weekly

Startups are the core of TechCrunch, so get our best coverage delivered weekly.

TechCrunch Fintech

The latest Fintech news and analysis, delivered every Tuesday.

TechCrunch Mobility

TechCrunch Mobility is your destination for transportation news and insight.

Gemini’s data-analyzing abilities aren’t as good as Google claims

Two separate studies investigated how well Google’s Gemini models and others make sense out of an enormous amount of data.

Gemini’s data-analyzing abilities aren’t as good as Google claims

Featured Article

The biggest data breaches in 2024: 1B stolen records and rising

Some of the largest, most damaging breaches of 2024 already account for over a billion stolen records.

The biggest data breaches in 2024: 1B stolen records and rising

Apple finally supports RCS in iOS 18 update

Welcome back to TechCrunch’s Week in Review — TechCrunch’s newsletter recapping the week’s biggest news. Want it in your inbox every Saturday? Sign up here. This week, Apple finally added…

Apple finally supports RCS in iOS 18 update

SAP, and Oracle, and IBM, oh my! ‘Cloud and AI’ drive legacy software firms to record valuations

There’s something of a trend around legacy software firms and their soaring valuations: Companies founded in dinosaur times are on a tear, evidenced this week with SAP‘s shares topping $200 for the first time. Founded in 1972, SAP’s valuation currently sits at an all-time high of $234 billion. The Germany-based…

SAP, and Oracle, and IBM, oh my! ‘Cloud and AI’ drive legacy software firms to record valuations

Women in AI: Sarah Bitamazire helps companies implement responsible AI

Sarah Bitamazire is the chief policy officer at the boutique advisory firm Lumiera.

Women in AI: Sarah Bitamazire helps companies implement responsible AI

IRS finalizes new regulations for crypto tax reporting

Crypto platforms will need to report transactions to the Internal Revenue Service, starting in 2026. However, decentralized platforms that don’t hold assets themselves will be exempt. Those are the main…

IRS finalizes new regulations for crypto tax reporting

Detroit Police Department agrees to new rules around facial recognition tech

As part of a legal settlement, the Detroit Police Department has agreed to new guardrails limiting how it can use facial recognition technology. These new policies prohibit the police from…

Detroit Police Department agrees to new rules around facial recognition tech

Plaid, once aimed at mostly fintechs, is growing its enterprise business and now has over 1,000 customers signed on

Plaid’s expansion into being a multi-product company has led to real traction beyond traditional fintech customers.

Plaid, once aimed at mostly fintechs, is growing its enterprise business and now has over 1,000 customers signed on

MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI

He says that the problem is that generative AI is not human or even human-like, and it’s flawed to try and assign human capabilities to it.

MIT robotics pioneer Rodney Brooks thinks people are vastly overestimating generative AI

Matrix rebrands India, China units over ‘organizational independence’

Matrix is rebranding its India and China affiliates, becoming the latest venture firm to distance its international franchises. The U.S.-headquartered venture capital firm will retain its name, while Matrix Partners…

Matrix rebrands India, China units over ‘organizational independence’

Amazon hires founders away from AI startup Adept

Adept, a startup developing AI-powered “agents” to complete various software-based tasks, has agreed to license its tech to Amazon and the startup’s co-founders and portions of its team have joined…

Amazon hires founders away from AI startup Adept

YC alum Fluently’s AI-powered English coach attracts $2M seed round

There are plenty of resources to learn English, but not so many for near-native speakers who still want to improve their fluency. That description applies to Stan Beliaev and Yurii…

YC alum Fluently’s AI-powered English coach attracts $2M seed round

NASA and Boeing deny Starliner crew is ‘stranded’: “We’re not in any rush to come home”

NASA and Boeing officials pushed back against recent reporting that the two astronauts brought to the ISS on Starliner are stranded on board. The companies said in a press conference…

NASA and Boeing deny Starliner crew is ‘stranded’: “We’re not in any rush to come home”

Forget the debate, the Supreme Court just declared open season on regulators

As the country reels from a presidential debate that left no one looking good, the Supreme Court has swooped in with what could be one of the most consequential decisions…

Forget the debate, the Supreme Court just declared open season on regulators

Android’s upcoming ‘Collections’ feature will drive users back to their apps

As Google described during the I/O session, the new on-device surface would organize what’s most relevant to users, inviting them to jump back into their apps.

Android’s upcoming ‘Collections’ feature will drive users back to their apps

Kleiner Perkins announces $2 billion in fresh capital, showing that established firms can still raise large sums

Many VC firms are struggling to attract new capital from their own backers amid a tepid IPO environment. But established, brand-name firms are still able to raise large funds. On…

Kleiner Perkins announces $2 billion in fresh capital, showing that established firms can still raise large sums

DEI? More like ‘common decency’ — and Silicon Valley is saying ‘no thanks’

Welcome to Startups Weekly — Haje‘s weekly recap of everything you can’t miss from the world of startups. Sign up here to get it in your inbox every Friday. Editor’s…

DEI? More like ‘common decency’ — and Silicon Valley is saying ‘no thanks’

HubSpot says it’s investigating customer account hacks

The company “identified a security incident that involved bad actors targeting a limited number of HubSpot customers and attempting to gain unauthorized access to their accounts” on June 22.

HubSpot says it’s investigating customer account hacks

Volkswagen’s Silicon Valley software hub is already stacked with Rivian talent

VW Group’s struggling software arm Cariad has hired at least 23 of the startup’s top employees over the past several months.

Volkswagen’s Silicon Valley software hub is already stacked with Rivian talent

All VCs say they are founder friendly; Detroit’s Ludlow Ventures takes that to another level

VCs Jonathon Triest and Brett deMarrais see their ability to read people and create longstanding relationships with founders as the primary reason their Detroit-based venture firm, Ludlow Ventures, is celebrating its 15th year in business. It sounds silly, attributing their longevity to what’s sometimes called “Midwestern nice.” But is it…

All VCs say they are founder friendly; Detroit’s Ludlow Ventures takes that to another level

The White House will host a conference for social media creators

President Joe Biden’s administration is doubling down on its interest in the creator economy. In August, the White House will host the first-ever White House Creator Economy Conference, which will…

The White House will host a conference for social media creators

Pitch Deck Teardown: MegaMod’s $1.9M seed deck

In an industry where creators are often tossed aside like yesterday’s lootboxes, MegaMod swoops in with a heroic promise to put them front and center.

Pitch Deck Teardown: MegaMod’s $1.9M seed deck

Google Gemini: Everything you need to know about the new generative AI platform

Google’s trying to make waves with Gemini, its flagship suite of generative AI models, apps and services. So what’s Google Gemini, exactly? How can you use it? And how does…

Google Gemini: Everything you need to know about the new generative AI platform

Who won the presidential debate: X or Threads?

There were definite differences between how the two platforms managed last night, with some saying X felt more alive, and others asserting that Threads proved that X is no longer…

Who won the presidential debate: X or Threads?

Following raft of consumer complaints, Shein and Temu face early EU scrutiny of DSA compliance

Ultra-low-cost e-commerce giants Shein and Temu have only recently been confirmed as subject to centralized enforcement of the strictest layer of the European Union’s digital services regulation, the Digital Services…

Following raft of consumer complaints, Shein and Temu face early EU scrutiny of DSA compliance

Cold shipping might be the next industry that batteries disrupt

Artyc has raised $14 million to date and has a product on the market, Medstow Micro, that helps ship temperature-sensitive specimens.

Cold shipping might be the next industry that batteries disrupt

Elevate your 2025 fundraising strategy at Disrupt 2024

Get ready to unlock the secrets of successful fundraising in the upcoming year at Disrupt 2024. Our featured session, “How to Raise in 2025 if You’ve Taken a Flat, Down,…

Elevate your 2025 fundraising strategy at Disrupt 2024

Remote access giant TeamViewer says Russian spies hacked its corporate network

The remote access giant linked the cyberattack to government-backed hackers working for Russian intelligence, known as APT29.

Remote access giant TeamViewer says Russian spies hacked its corporate network

Here are the hottest product announcements from Apple, Google, Microsoft and others so far in 2024

We’ve poked through the many product announcements made by the biggest tech companies and product trade shows of the year, so far, and compiled them into this list.

Here are the hottest product announcements from Apple, Google, Microsoft and others so far in 2024

Feather raises €6M to go Pan-European with its insurance platform for expats

As a foreigner, navigating health insurance systems can often be difficult. German startup Feather thinks it has a solution and raised €6 million to help some of the 40-plus million…

Feather raises €6M to go Pan-European with its insurance platform for expats

FactCheck.org

FactChecking the Biden-Trump Debate

In the first debate clash of the 2024 campaign, the two candidates unleashed a flurry of false and misleading statements.

By Robert Farley , Eugene Kiely , D'Angelo Gore , Jessica McDonald , Lori Robertson , Catalina Jaramillo , Saranac Hale Spencer and Alan Jaffe

Posted on June 28, 2024

The much-anticipated first debate of 2024 between President Joe Biden and former President Donald Trump featured a relentless barrage of false and misleading statements from the two candidates on immigration, the economy, abortion, taxes and more.

  • Both candidates erred on Social Security, with Biden incorrectly saying that Trump “wants to get rid” of the program, and Trump falsely alleging that Biden will “wipe out” Social Security due to the influx of people at the border.
  • Trump misleadingly claimed that he was “the one that got the insulin down for the seniors,” not Biden. Costs were lowered for some under a limited project by the Trump administration. Biden signed a law capping costs for all seniors with Medicare drug coverage.
  • Trump warned that Biden “wants to raise your taxes by four times,” but Biden has not proposed anything like that. Trump was also mostly wrong when he said Biden “wants the Trump tax cuts to expire.” Biden said he would extend them for anyone making under $400,000 a year.
  • Biden repeated his misleading claim that billionaires pay an average federal tax rate of 8%. That White House calculation factors in earnings on unsold stock as income.
  • Trump repeated his false claim that “everybody,” including all legal scholars, wanted to end Roe v. Wade’s constitutional right to abortion.
  • Trump falsely claimed that “the only jobs” Biden “created are for illegal immigrants and bounced back jobs that bounced back from the COVID.” Total nonfarm employment is higher than it was before the pandemic, as is the employment level of native-born workers.
  • Biden claimed that Trump oversaw the “largest deficit of any president,” while Trump countered that “we now have the largest deficit” under Biden. The largest budget deficit was under Trump in fiscal year 2020, but that was largely because of emergency spending due to COVID-19.
  • Biden misleadingly said that “Black unemployment is the lowest level it has been in a long, long time.” The rate reached a record low in April 2023, and it was low under Trump, too, until the pandemic.
  • Biden said Trump called U.S. veterans killed in World War I “suckers and losers,” which Trump called a “made up quote.” The Atlantic reported that, based on anonymous sources. A former Trump chief of staff later seemed to confirm Trump said it.
  • Trump claimed that Biden “caused the inflation,” but economists say rising inflation was mostly due to disruptions to the economy caused by the pandemic.
  • Trump grossly inflated the number of immigrants who have entered the country during the Biden administration — putting the number at 18 million to 20 million — and he said, without evidence, that many of them are from prisons and mental institutions.
  • Trump claimed that “we had the safest border in history” in the “final months” of his presidency. But apprehensions of those trying to cross illegally in the last three full months of his presidency were about 50% higher than in the three months before he took office.
  • Biden criticized Trump for presiding over a loss of jobs when he was president, but that loss occurred because of the COVID-19 pandemic.
  • Trump falsely claimed that “some states” run by Democrats allow abortions “after birth.” If it happened, it would be homicide, and that’s illegal.
  • Trump made the unsupported claim that the U.S. border with Mexico is “the most dangerous place in the world,” and suggested that it has opened the country to a violent crime wave. The data show a reduction in violent crime in the U.S.
  • Trump overstated how much food prices have risen due to inflation. Prices are up by about 20%, not double or quadruple. 
  • Trump boasted his administration “had the best environmental numbers ever.” Trump reversed nearly 100 environmental rules limiting pollution. Although greenhouse gas emissions did decline from 2019 to 2020, the EPA said that was due to the impacts of the pandemic on travel and the economy.   
  • Biden said he joined the Paris Agreement because “if we reach the 1.5 degrees Celsius, and then … there’s no way back.” Limiting global warming to 1.5 degrees would reduce the damages and losses of global warming, but scientists agree that climate action is still possible after passing the threshold.
  • Trump said immigrants crossing the border illegally were living in “luxury hotels.” New York City has provided hotel and motel rooms to migrant families, but there is no evidence that they are being placed in “luxury” hotels. 
  • Trump falsely claimed that there was “no terrorism, at all” in the U.S. during his administration. There were several terrorist acts carried out by foreign-born individuals when he was president.
  • While talking about international trade, Trump falsely claimed that the U.S. currently has “the largest deficit with China.” In 2023, the trade deficit in goods and services with China was the lowest it has been since 2009.
  • Trump wrongly claimed that prior to the pandemic, he had created “the greatest economy in the history of our country.” That’s far from true using economists’ preferred measure — growth in gross domestic product.
  • As he has many times before, Trump wrongly claimed, “I gave you the largest tax cut in history.” That’s not true either as a percentage of gross domestic product or in inflation-adjusted dollars.
  • Trump contrasted his administration with Biden’s by misleadingly noting that when he left office, the U.S. was “energy independent.” The U.S. continues to export more energy than it imports.

The debate was hosted by CNN in Atlanta on June 27.

Social Security

Biden claimed that Trump “wants to get rid” of Social Security, even though the former president has consistently said he will not cut the program and has advised Republicans against doing so.

will ai help the world or hurt it essay

Earlier this year, Biden and his campaign based the claim on Trump saying in a  March 11 CNBC interview  that “there is a lot you can do in terms of entitlements in terms of cutting and in terms of also the theft and the bad management of entitlements.” As  we’ve said , in context, instead of reducing benefits, Trump was talking about cutting waste and fraud in those programs — although there’s not enough of that to make the program solvent over the long term.

“I will never do anything that will jeopardize or hurt Social Security or Medicare,” Trump later said in a  March 13 Breitbart interview . “We’ll have to do it elsewhere. But we’re not going to do anything to hurt them.”

During the GOP presidential primary, Trump also  criticized  some of his Republican opponents for proposing to raise the retirement age for Social Security, which budget experts  have said  would reduce scheduled benefits for those affected.

Some critics of Trump have  argued  that he cannot be expected to keep his promise because of his past budget proposals. But,  as we’ve written , Trump did not propose cuts to Social Security retirement benefits.

Meanwhile, Trump claimed during the debate that Biden “is going to single handedly destroy Social Security” because of illegal immigration. “These millions and millions of people coming in, they’re trying to put them on Social Security. He will wipe out Social Security,” Trump said of Biden.

As  we  and  others  have explained before, immigrants who are not authorized to be in the U.S. aren’t eligible for Social Security. In fact, because many such individuals pay into Social Security via payroll taxes but cannot receive benefits, illegal immigrants bolster rather than drain the finances of the program.

In referring to what seniors pay for insulin, Trump misleadingly claimed, “I heard him say before ‘insulin.’ I’m the one that got the insulin down for the seniors. I took care of the seniors.” Insulin costs went down for some beneficiaries under a limited project under Trump; Biden signed a more expansive law affecting all seniors with Medicare drug coverage.

Under Trump, out-of-pocket costs were lowered to $35 for some Medicare Part D beneficiaries under a two-year pilot project in which some insurers could voluntarily reduce the cost for some insulin products. KFF, a nonpartisan health policy research organization,  explained  earlier this month that under this model, in effect from 2021 to 2023, “participating Medicare Part D prescription drug plans covered at least one of each dosage form and type of insulin product at no more than $35 per month,” and “less than half of all Part D plans chose to participate in each year.”

But in 2022, Biden  signed a law  that required all Medicare prescription drug plans to cap all insulin products at $35. The law also capped the out-of-pocket price for insulin that’s covered under Medicare Part B, which covers drugs administered in a health care provider’s office. The caps went into effect last year.

STAT, a news site that covers health care issues,  reported  that the idea for a $35 cap for seniors initially came from Eli Lilly, the pharmaceutical company, which proposed it in 2019.

Trump on Biden Tax Plan

“He’s the only one I know he wants to raise your taxes by four times,” Trump said of Biden. “He wants to raise everybody’s taxes by four times. He wants the Trump tax cuts to expire. So everybody … [is] going to pay four to five times –  nobody ever heard of this before.”

Trump regularly warns of massive tax hikes for “everybody,” should Biden be reelected. That doesn’t jibe with anything Biden has proposed.

In his more than three years as president, Biden’s  major tax changes  have included setting a  minimum corporate tax rate  of 15% and lowering taxes for some families by  expanding the child tax credit  and, for a time, making it fully refundable, meaning families could still receive a refund even if they no longer owe additional taxes.

As  we wrote  in 2020, when Trump made a similar claim, Biden proposed during that campaign to raise an additional $4 trillion in taxes over the next decade, although the increases would have fallen mainly on very high-income earners and corporations. The plan would not have doubled or tripled people’s taxes at any income level (on average), according to analyses of Biden’s plan by the  Penn Wharton Budget Model ,  the Tax Policy Center  and  the Tax Foundation .

In March 2023, the TPC’s Howard Gleckman  wrote  that Biden proposed a 2024 budget that would, on average, increase after-tax incomes for low-income households and “leave them effectively unchanged for middle-income households.” The Tax Policy Center noted, “The top 1 percent, with at least roughly $1 million in income, would pay an average of $300,000 more than under current law, dropping their after-tax incomes by 14 percent.”

This March, Biden released his  fiscal year 2025 budget , which contains many of the same proposals and adds a few new wrinkles. But it still  does not contain  any “colossal tax hikes” on typical American families, as Trump has said.

Biden’s latest plan proposes — as he has in the past — to increase the corporate income tax rate from 21% to 28%, and to  restore  the top individual tax rate of 39.6% from the current rate of 37%. It would also increase the corporate minimum tax rate from 15% to 21% for companies that report average profits in excess of $1 billion over a three-year period. And the plan would impose a 25% minimum tax on very wealthy individuals. The plan also proposes to extend the expanded child tax credit enacted in the American Rescue Plan through 2025, and to make the child tax credit fully refundable on a permanent basis.

Trump is also mostly wrong that Biden “wants the Trump tax cuts to expire.”

As he has said since the 2020 campaign, Biden’s FY 2025 budget vows not to increase taxes on people earning less than $400,000.

In order to keep that pledge, Biden would have to extend most of the individual income tax provisions enacted in the Tax Cuts and Jobs Act that are set to expire at the end of 2025. And that’s what Biden says he would do — but  only for  individual filers earning less than $400,000 and married couples making less than $450,000. (In order to pass the TCJA with a simple Senate majority, Republicans wrote the law to have most of the individual income tax changes  expire after 2025 .)

The Biden budget plan “would raise marginal income tax rates faced by higher earners and corporations while expanding tax credits for lower-income households,” according to a Tax Foundation  analysis  of the tax provisions in Biden’s budget. “The budget would redistribute income from high earners to low earners. The bottom 60 percent of earners would see increases in after-tax income in 2025, while the top 40 percent of earners would see decreases.”

Biden on Taxes Paid by Billionaires

In arguing that wealthy households should pay a minimum tax, Biden repeated his misleading claim that billionaires pay an average federal tax rate of 8%.

“We have a thousand … billionaires in America, and what’s happening?”  Biden said . “They’re in a situation where they in fact pay 8.2% in taxes.”

That’s not the average rate in the current tax system; it’s a figure  calculated  by the White House and factors in earnings on unsold stock as income. When only considering income, the top-earning taxpayers, on average, pay higher tax rates than those in lower income groups, as  we’ve written  before.

The top 0.1% of earners pay an average rate of 25.1% in federal income and payroll taxes,  according to  an analysis by the Tax Policy Center in October 2022 for the 2023 tax year.

The point that Biden tried to make is that earnings on assets, such as stock, currently are not taxed until that asset is sold, which is when the earnings become subject to capital gains taxes. Until stocks and assets are sold, the earnings are referred to as “unrealized” gains. Unrealized gains, the White House  has argued , could go untaxed forever if wealthy people hold on to them and transfer them on to heirs when they die.

Roe v. Wade

As he has  before , Trump wildly exaggerated the popularity of ending Roe v. Wade — even going so far as to claim that it was “something that everybody wanted.”

“51 years ago, you had Roe v. Wade and everybody wanted to get it back to the states,”  he said , referring to the 1973 Supreme Court ruling that established a constitutional right to abortion, which was  overturned  in 2022.

Trump:  Everybody, without exception: Democrats, Republicans, liberals, conservatives. Everybody wanted it back — religious leaders. And what I did is I put three great Supreme Court justices on the court and they happened to vote in favor of killing Roe v. Wade, and moving it back to the states. This is something that everybody wanted. Now 10 years ago or so they started talking about how many weeks and how many this and getting into other things. But every legal scholar throughout the world — the most respected — wanted it brought back to the states. I did that.

In fact, a majority of Americans have disagreed with ending Roe v. Wade, including plenty of legal scholars, as we’ve explained  before . While some scholars criticized aspects of the legal reasoning in Roe, it did not necessarily mean they wanted the ruling overturned. Legal experts told us that Trump’s claim was “utter nonsense” and “patently absurd.”

Trump Wrong on Jobs

After Biden talked about job creation during his administration, Trump falsely claimed that “the only jobs [Biden] created are for illegal immigrants and bounced back jobs that bounced back from the COVID.”

In fact, as of May,  total nonfarm employment  in the U.S. had gone up about 6.2 million from the pre-pandemic peak in February 2020, according to figures from the Bureau of Labor Statistics. The increase is about 15.6 million if you count from when Biden took office in January 2021 until now — but that would include some jobs that were temporarily lost during the pandemic and then came back during the economic recovery.

Furthermore, there is no evidence that only “illegal immigrants” have seen employment gains.

Since Biden became president in January 2021, employment of U.S.-born workers has increased more than employment of foreign-born workers, a category that includes anyone who wasn’t a U.S. citizen at birth, as we’ve written before . BLS says the  foreign-born  population includes “legally-admitted immigrants, refugees, temporary residents such as students and temporary workers, and undocumented immigrants.” There is no employment breakdown for just people in the U.S. illegally.

In looking at employment since the pre-pandemic peak, the employment level of  foreign-born workers  was up by about 3.2 million, from roughly 27.7 million in February 2020 to nearly 30.9 million in May. Employment for the  U.S.-born population  increased by about 125,000 — from nearly 130.3 million in February 2020 to 130.4 million, as of May.

Conflicting Budget Deficit Claims

Biden and Trump accused each other of presiding over the largest budget deficit in the U.S.

After talking about Trump’s plans for additional tax cuts, Biden said Trump already had the “largest deficit of any president in American history.” When he got a chance to respond, Trump said, “We now have the largest deficit in the history of our country under this guy,” referring to Biden.

Biden is correct: The  largest budget deficit  on record was about $3.1 trillion in fiscal year 2020 under Trump. However, that was  primarily  because of trillions of dollars in emergency funding that both congressional Republicans and Democrats approved to address the COVID-19 pandemic. Before the pandemic, the largest budget deficit under Trump was about $1 trillion in fiscal 2019.

Meanwhile, the most recent budget deficit under Biden was about $1.7 trillion in fiscal 2023. As of June, the nonpartisan Congressional Budget Office  projected  that the deficit for fiscal 2024, which ends on Sept. 30, would be about $2 trillion.

Black Unemployment

Biden boasted that on his watch, “Black unemployment is the lowest level it has been in a long, long time.”

It’s true that the unemployment rate for Black or African American people reached a record low of 4.8% in April 2023, but it is currently 6.1%,  according to  the Bureau of Labor Statistics, which has data going back to 1972.

Also, the unemployment rate was low under Trump, too, until the pandemic.

Under Trump, the  unemployment rate for Black Americans  went down to 5.3% in August 2019 – the lowest on record at that time. It shot up to 16.9% in April 2020, when the economic effects of the pandemic took hold. When Trump left office in January 2021, amid the pandemic, the rate was 9.3%.

The rate has been 6% or less in only 29 months since 1972, and it happened only under two presidents: 21 times under Biden and eight times under Trump.

‘Suckers and Losers’

Biden  said  Trump called U.S. veterans killed in World War I “suckers and losers,” which Trump called a “made up quote … that was in a third-rate magazine.”

It was first reported by a magazine — the Atlantic — but Trump’s former chief of staff,  John F. Kelly , a retired four-star Marine general, later seemed to confirm it.

Biden was referring to a trip Trump made to France in November 2018, where he reportedly declined to visit the  Aisne-Marne American Cemetery  near the location of the Battle of Belleau Wood. “He was standing with his four-star general and he told him, ‘I don’t want to go in there because they’re a bunch of losers and suckers.’”

The Atlantic  wrote  about this alleged incident in 2020, citing unnamed sources. The magazine wrote that Trump made his remark about “losers” when he declined to visit the Aisne-Marne American Cemetery, and his remark about “suckers” during that same trip.

The Atlantic, Sept. 3, 2020:  In a conversation with senior staff members on the morning of the scheduled visit, Trump said, “Why should I go to that cemetery? It’s filled with losers.” In a separate conversation on the same trip, Trump referred to the more than 1,800 marines who lost their lives at Belleau Wood as “suckers” for getting killed.

In October 2023, Kelly – who was on that trip and visited the Aisne-Marne Cemetery — gave a  statement to CNN  that seemed to confirm those remarks. CNN published Kelly’s statement.

CNN, Oct. 3, 2023:  “What can I add that has not already been said?” Kelly said, when asked if he wanted to weigh in on his former boss in light of recent comments made by other former Trump officials. “A person that thinks those who defend their country in uniform, or are shot down or seriously wounded in combat, or spend years being tortured as POWs are all ‘suckers’ because ‘there is nothing in it for them.’ A person that did not want to be seen in the presence of military amputees because ‘it doesn’t look good for me.’ A person who demonstrated open contempt for a Gold Star family – for all Gold Star families – on TV during the 2016 campaign, and rants that our most precious heroes who gave their lives in America’s defense are ‘losers’ and wouldn’t visit their graves in France.”

Trump said, “We had 19 people who said I didn’t say it.” One of those who said that he didn’t hear Trump make those remarks is John Bolton, Trump’s former national security adviser who was also on the trip and said he was there when the decision was made not to visit the cemetery.

“I didn’t hear that,” Bolton  told the New York Times  in 2020 after the magazine story first appeared. “I’m not saying he didn’t say them later in the day or another time, but I was there for that discussion.”

Biden Misleads on Jobs

Biden ignored the economic impact of the COVID-19 pandemic when he criticized Trump for employment going down over Trump’s time in office.

“He’s the only president other than Herbert Hoover that lost more jobs than he had when he began,” Biden said.

Job growth during Trump’s term was positive until the economy lost 20.5 million jobs in April 2020, as efforts to slow the spread of the novel coronavirus led to business closures and layoffs. By the time Trump left office in January 2021, employment had partly rebounded, but was still 9.4 million jobs below the February 2020 peak,  according to the Bureau of Labor Statistics .

Trump repeatedly claimed that Biden “caused the inflation” and that “I gave him a country with no essentially no inflation. It was perfect. It was so good.”

It’s true that inflation was relatively modest when Trump was president. The  Consumer Price Index rose 7.6%  under Trump’s four years — continuing a long period of low inflation. And inflation has been high over the entirety of Biden’s time in office. The  Consumer Price Index  for all items rose 19.3% between January 2021 and May.

For a time, it was the worst inflation in decades. The 12 months ending in June 2022 saw a 9% increase in the CPI (before seasonal adjustment), which the  Bureau of Labor Statistics said  was the biggest such increase since the 12 months ending in November 1981.

Inflation has moderated more recently. The CPI  rose  3.3% in the 12 months ending in May, the most recent figure available.

Although Trump claims that Biden is entirely responsible for massive inflation, economists  we have spoken to  say Biden’s policies are only partly to blame. The economists placed the lion’s share of the blame for inflation on disruptions to the economy caused by the pandemic, including supply shortages, labor issues and increased consumer spending on goods. Inflation was then worsened by Russia’s attack on Ukraine, which drove up oil and gas prices, experts told us.

Indeed, inflation has been a  worldwide problem  post-pandemic.

However, many economists say Biden’s policies — particularly aggressive stimulus spending early in his presidency to offset some of the economic damage caused by the pandemic — played a modest role.

Jason Furman , a former economic adviser to President Barack Obama and now a Harvard University professor, told us in June 2022 that he estimated about 1 to 4 percentage points worth of the inflation was due to Biden’s stimulus spending in the  American Rescue Plan  — a $1.9 trillion pandemic relief measure that included $1,400 checks to most Americans; expanded unemployment benefits; and money for schools, small businesses and states.  Mark Zandi , chief economist of Moody’s — whose work is often cited by the White House — said the impact of the stimulus measure now “has largely faded.”

Economists note that the American Rescue Plan came after two other pandemic stimulus laws enacted under Trump that were  worth  a  total  of $3.1 trillion. That spending, too, could have contributed to inflation.

Immigrants Entering U.S. Under Biden

Trump grossly inflated the number of immigrants who have entered the country during the Biden administration — putting the number at 18 million to 20 million. The number, by our calculation, is about a third of that. Trump also claimed, without evidence, that many of those immigrants are from prisons and mental institutions.

“It could be 18, it could be 19, and even 20 million people,” Trump said of the immigrants who have entered the U.S. during the Biden administration. Later in the debate, Trump asked Biden why there had been no accountability “for allowing 18 million people many from prisons, many from mental institutions” into the country.

That’s a greatly exaggerated number. We took a deep dive into the immigration numbers  in February , and again in  mid-June , and we came up with an estimate of at most a third of Trump’s number.

Here’s the breakdown:

Department of Homeland Security data show nearly 8 million encounters at the U.S.-Mexico border between February 2021, the month after Biden took office, and May, the last month of available  statistics . That’s a figure that includes both the 6.9 million apprehensions of migrants caught between legal ports of entry – the number typically used for illegal immigration – and nearly 1.1 million encounters of migrants who arrived at ports of entry without authorization to enter the U.S.

DHS also has comprehensive data, through February, of the initial processing of these encounters. That information shows 2.9 million were removed by Customs and Border Protection and 3.2 million were released with notices to appear in immigration court or report to Immigration and Customs Enforcement in the future, or other classifications, such as parole. (Encounters do not represent the total number of people, because some people attempt multiple crossings. For example, the recidivism rate was 27% in fiscal year 2021,  according to the most recent figures  from CBP.) 

As  we’ve explained before , there are also estimates for “gotaways,” or migrants who crossed the border illegally and evaded the authorities. Based on an average annual apprehension rate of 78%, which DHS provided to us, that would mean there were an estimated 1.8 million gotaways from February 2021 to February 2024. The gotaways plus those released with court notices or other designations would total about 5 million.

There were also 407,500 transfers of unaccompanied children to the Department of Health and Human Services and 883,000 transfers to ICE. The ICE transfers include those who are then booked into ICE custody, enrolled in “ alternatives to detention ” (which include technological monitoring) or released by ICE. We don’t know how many of those were released into the country with a court notice. But even if we include those figures, it still doesn’t get us to anywhere near 18 to 20 million.

And we should note that these figures do not reflect whether a migrant may ultimately be allowed to stay or will be deported, particularly since there is a yearslong backlog of immigration court cases.

Also, as we have  written   repeatedly , Trump has provided no credible support for his incendiary claim that countries are emptying their prisons and mental institutions and sending those people to the U.S. Experts tell us they have seen no evidence to substantiate it.

Earlier this month, we looked into  Trump’s claim as it relates to Venezuela, because Trump has repeatedly cited a drop in crime there to support his claim about countries emptying their prisons and sending inmates to the U.S. Reported crime is trending down in Venezuela, but crime experts in the country say there are numerous reasons for that — including an enormous out-migration of citizens and a consolidation of gang activity — and they have nothing to do with sending criminals to the U.S.

“We have no evidence that the Venezuelan government is emptying the prisons or mental hospitals to send them out of the country, whether to the USA or any other country,” Roberto Briceño-León, founder and director of the independent Venezuelan Observatory of Violence, told us.

Border Under Trump

Trump claimed that “we had the safest border in history” in the “final months” of his presidency, according to Border Patrol. But according to  data  provided by Customs and Border Protection, apprehensions of those trying to cross illegally into the U.S. in the last three full months of Trump’s presidency were about 50% higher than in the  three months  before he took office.

In fact, as we wrote in our piece, “ Trump’s Final Numbers ,” illegal border crossings, as measured by  apprehensions at the southwest border , were 14.7% higher in Trump’s final year in office compared with the last full year before he was sworn in.

But these statistics tell only part of the story. The number of apprehensions fluctuated wildly during Trump’s presidency, from a  monthly  low of 11,127 in April 2017 to a high of 132,856 in May 2019.

Back in April,  we wrote  about a misleading chart that Trump showed to the crowd during a speech in Green Bay, Wisconsin. “See the arrow on the bottom? That was my last week in office,” Trump said. “That was the lowest number in history.” But Trump was wrong on both points.

The arrow was pointing to apprehensions in April 2020, when apprehensions plummeted during the height of the pandemic.

“The pandemic was responsible for a near-complete halt to all forms of global mobility in 2020, due to a combination of border restrictions imposed by countries around the world,”  Michelle Mittelstadt , director of communications for the Migration Policy Institute, told us.

After apprehensions reached a pandemic low in April 2020, they rose every month after that. In his last months in office, apprehensions had more than quadrupled from that pandemic low and were higher than the month he took office.

Trump falsely claimed that “some states” run by Democrats allow abortions “after birth.” As  we have written , that’s simply false. If it happened, it would be  homicide , and that’s  illegal .

“No such procedure exists,” the American College of Obstetricians and Gynecologists  says  on its website.

The former president  has wrongly said  that abortions after birth were permitted under Roe v. Wade — the Supreme Court ruling that established a constitutional right to abortion until it was  reversed  in 2022. It was not.

Under Roe, states could outlaw abortion after fetal viability, but with exceptions for risks to the life or health of the mother. Many Republicans  have objected  to the health stipulation, saying it would allow abortion for any reason. Democrats say exceptions are needed to protect the mother from medical risks. We should note, late-term abortions  are rare . According to the  Centers for Disease Control and Prevention , less than 1% of abortions in the U.S. in 2020 were performed after 21 weeks gestational time.

In June 2022, after Trump had appointed three conservative justices to the Supreme Court, the court  overturned  Roe in a 5-4 ruling. Biden  supports  restoring Roe as “the law of the land,” as he said in his State of the Union address in March.

Trump Calls Border ‘The Most Dangerous Place’

In his focus on the U.S. border with Mexico, Trump  made  the unsupported claim that it is “the most dangerous place in the world.”

It’s true that unauthorized border crossings  can be dangerous  — 895 people died while doing so in fiscal year 2022, which is the most recent year for which the Customs and Border Protection has  data . Most of those deaths were heat related.

And the International Organization for Migration called calendar year 2022 “the deadliest year on record” for migration in the Americas, with a total of 1,457 fatalities throughout South America, Central America, North America and the Caribbean. The organization began tracking deaths and disappearances related to migration in 2014.

“Most of these fatalities are related to the lack of options for safe and regular mobility, which increases the likelihood that people see no other choice but to opt for irregular migration routes that put their lives at risk,” the organization said in its  2022 report .

Trump suggested that the border crossings imperil Americans when he went on to say, “these killers are coming into our country, and they are raping and killing women.”

But, as  we’ve written before , FBI data show a downward trend in violent crime in the U.S., and there’s no evidence to support the claim that there’s been a crime wave driven by immigrants.

Crime analyst Jeff Asher, co-founder of the New Orleans firm  AH Datalytics , told us in May that there’s no evidence in the data to indicate a migrant crime wave.

Similarly, Jeffrey Butts, director of the Research and Evaluation Center at the John Jay College of Criminal Justice,  told the New York Times  in February there was no evidence of a migrant crime wave in New York City after Texas Gov. Greg Abbott began busing migrants there in April 2022.

“I would interpret a ‘wave’ to mean something significant, meaningful and a departure from the norm,” Butts said at the time. “So far, what we have are individual incidents of crime.”

Also, it’s worth noting that the Institute for Economics and Peace’s  Global Peace Index  — which measures the safety of 163 countries based on 23 indicators, including violent crime, deaths from internal conflict and terrorism — said the “least peaceful country” is Afghanistan, followed by Yemen, Syria, South Sudan and the Democratic Republic of the Congo.

In discussing inflation, the former president embellished the degree to which food prices have increased.

“It’s killing people. They can’t buy groceries anymore,” Trump said. “You look at the cost of food, where it’s doubled, tripled and quadrupled. They can’t live.”

According to the Bureau of Labor Statistics, the Consumer Price Index for food has  gone up 17.5%  — not 100% to 300% — since January 2021. The Consumer Price Index specifically for groceries, or “food at home,” has  risen 20.8% .

Climate Change

During a short exchange about climate change, Trump boasted that during his tenure “we had the best environmental numbers ever.” It is not clear what he was referring to exactly, but he said if elected president he wanted to have “absolutely immaculate clean water and I want absolutely clean air — and we had it.” He might have been referring to a talking point that Andrew Wheeler, Trump’s former Environmental Protection Agency administrator, had recommended Trump mention during the debate: “CO2 emissions went down” during his administration, as  the Hill reported . 

Greenhouse gas emissions, which are responsible for global warming,  did decline  from 2019 to 2020. But that was “largely due to the impacts of the coronavirus (COVID-19) pandemic on travel and economic activity,” according to the EPA. Emissions increased by 5.7% from 2020 to 2022, once the economy started getting reactivated again, the agency said. 

According to an  analysis by the New York Times , Trump’s administration reversed nearly 100 environmental rules, including 28 regulations on air pollution and emissions, and eight rules that limited water pollution. Reportedly, Trump  recently asked  oil executives and lobbyists to donate to his campaign, promising he would roll back other environmental rules that hurt fossil fuel interests. 

“He’s not done a damn thing for the environment,” Biden said in response, pointing out that Trump had  pulled the U.S. out of the Paris Agreement . “I immediately joined it because if we reach the 1.5 degrees Celsius … there’s no way back,” Biden said. 

As  we’ve reported , although reaching 1.5 degrees Celsius, or 2.7 degrees Fahrenheit, of warming comes with a number of very serious impacts, it is not a point of no return. Scientists agree that every increment of global warming increases these negative impacts, but 1.5 degrees is not a magic number after which everything is doomed, they say. 

Immigrants Living in Hotels

During the debate, Trump  mentioned   twice  that while immigrants crossing the border illegally were “living in luxury hotels,” in New York City and other cities “our veterans are living in the street.”

While it is true that New York City has  provided   hotel   rooms  to migrant families as a temporary shelter solution, there is no evidence that immigrants are being placed in “luxury” hotels. 

In 2023, Mayor Eric Adams  signed  a $275 million contract with the Hotel Association of New York City to house 5,000 migrants. The deal was intended to help  struggling hotels  impacted by the pandemic and did not expect to include luxury hotels. “There are no gold-plated rooms that are being given away contrary to any reports that you may have seen,” the association president  told NY1  at the time. In January, the city  signed  another $77 million contract to shelter migrant families in hotels. 

In April, social media posts falsely claimed immigrants had stormed New York City Hall to demand luxury hotel accommodations. But as the  Associated Press reported , the immigrants were there for a hearing about racial inequities in shelter and immigrant services. 

In 2023, the number of veterans experiencing homelessness increased 7.4% from 2022, according to  data  from the Department of Housing and Urban Development. But homelessness among veterans has been declining in recent years, with a 4% overall reduction within the last three years alone. 

Terrorist Attacks Under Trump

While talking about Iran and terrorism, Trump falsely claimed that “you had no terror, at all, during my administration.” As  we’ve written , there were several acts of terrorism carried out by foreign-born individuals when Trump was in office.

For example, in October 2017, Sayfullo Saipov  used  a truck to run down people in New York City. He killed eight people,  including  Americans and tourists, in an attack carried out on behalf of the Islamic State.

Then in December 2017, Akayed Ullah  detonated  a homemade pipe bomb he was wearing inside a New York City subway station. Ullah  told  authorities he did it in response to U.S. airstrikes against the Islamic State in Syria and other places.

Then in  December 2019 , Second Lt. Mohammed Saeed Alshamrani, a member of the Royal Saudi Air Force, shot 11 people at Florida’s Naval Air Station Pensacola, killing three U.S. sailors. Trump’s own attorney general, William Barr,  called  it an act of terrorism in January 2020. “The evidence shows that the shooter was motivated by jihadist ideology,” Barr said in a statement.

China Trade Deficit

When discussing U.S. trade relations with China, Trump said “we have the largest deficit with China.” That’s false, as  we’ve written .

In 2023, the U.S. had a trade deficit with China in goods and services of roughly $252 billion,  according to  revised figures the Bureau of Economic Analysis  released  in early June. The deficit in goods trading was about $279 billion which was partially offset by a roughly $27 billion surplus in the trading of  services  — which can include travel, transportation, finance and intellectual property.

The trade gap with China last year was the lowest it had been since 2009, when it was $220 billion.

In fact, according to BEA data going back to 1999, the highest total U.S.-China trade deficit in goods and services was about $378 billion in 2018 — when Trump was president. Under Biden, the highest trade deficit with China was $366 billion in 2022.

Not ‘Greatest Economy’ Under Trump

Trump falsely said that prior to the pandemic, the U.S. had “the greatest economy in the history of our country. … Everything was locked in good.”

Trump’s boast about creating the “greatest economy in history” is ubiquitous in his campaign speeches. And it’s not true, at least not by the objective measure typically used to gauge the health of the economy.

As  we have written , economists generally measure a nation’s health by the growth of its  inflation-adjusted gross domestic product . Under Trump, growth was modest. Real GDP in Trump’s four years grew annually by 2.5% in 2017, 3% in 2018 and 2.5% in 2019 — before the economy went into a tailspin during the pandemic in 2020, when real GDP declined by 2.2%,  according to  the Bureau of Economic Analysis.

So, in the best year under Trump, U.S. real GDP grew annually by 3%. By contrast, the nation’s economy grew at a faster annual rate  48 times  and under every president before and after Trump dating to 1930, except Barack Obama and Herbert Hoover. The economy grew at more than 3% six of Ronald Reagan’s eight years, including 7.2% in 1984, and it grew 5% or more 10 times under Franklin D. Roosevelt, including 18.9% in 1942.  Under Biden , the GDP grew by 5.8% in 2021 — a post COVID-19 bounce-back — by 1.9% in 2022 and 2.5% in 2023.

Trump’s Was Not Largest Tax Cut in History

As he has many times before, Trump wrongly claimed, “I gave you the largest tax cut in history.” But saying this over and over, as Trump has for years, doesn’t make it any more true.

As  we have been writing  even before the 2017  Tax Cuts and Jobs Act  was enacted into law, while the law provided tax relief to nearly all Americans, it was not the largest tax cut in U.S. history either as a percentage of gross domestic product (the measure preferred by economists) or in inflation-adjusted dollars.

According to a Tax Policy Center  analysis , the law reduced the individual income taxes owed by Americans by about $1,260 on average in 2018. It also reduced the top corporate tax rate from  35% to 21% , beginning in January 2018.

The law signed by Trump was initially projected to cost $1.49 trillion over 10 years,  according to the nonpartisan Joint Committee on Taxation . It could end up costing substantially more if individual tax provisions are extended past 2025. Over the first four years, the average annual cost was estimated to be $185 billion. That was about 0.9% of  gross domestic product  in 2018.

That’s nowhere close to President Ronald Reagan’s 1981 tax cut, which was 2.89% of GDP over a four-year average. That’s according to a  2013 Treasury Department analysis  on the revenue effects of major tax legislation. Five more tax measures since 1940 had an impact larger than 1% of GDP, and the Committee for a Responsible Federal Budget  includes  a 1921 measure as also being larger than the 2017 plan. That’s eighth place for Trump’s “biggest tax cut in our history.”

In inflation-adjusted dollars, the Trump-era tax cut is also less than the American Taxpayer Relief Act of 2012, which comes in at No. 1 with a $320.6 billion cost over a four-year average. And it’s less than tax reductions in 2010 ($210 billion) and 1981 ($208 billion).

Energy Independence

Trump boasted, as he  often does , that “on Jan. 6 [2021], we were energy independent,” implying that’s no longer the case under Biden. But by Trump’s definition, the country remains energy independent.

To be clear, under Trump, the U.S. never stopped  importing  sources of energy,  including crude oil , from other countries. What he likely means is that the country either  produced  more energy than it consumed, or  exported  more energy than it imported. During Trump’s presidency, after years trending in that direction, the U.S. did hit a tipping point where exports of primary energy exceeded energy imports from foreign sources in 2019 and 2020 — the first times that had happened since 1952,  according to  the U.S. Energy Information Administration. 

But contrary to Trump’s suggestion, that has continued in the Biden presidency. The U.S., during Biden’s presidency, has  exported  more energy,  including petroleum , than it imported, and it has  produced  more energy than it consumed. Also, the U.S. is producing record amounts of  oil  and  natural gas  under Biden.

Editor’s note: FactCheck.org does not accept advertising. We rely on grants and individual donations from people like you. Please consider a donation. Credit card donations may be made through  our “Donate” page . If you prefer to give by check, send to: FactCheck.org, Annenberg Public Policy Center, 202 S. 36th St., Philadelphia, PA 19104. 

Bureau of Labor Statistics. “ Unemployment Rate – Black or African American .” Data extracted 27 Jun 2024.

Robertson, Lori. “ Biden’s Tax Rate Comparison for Billionaires and Schoolteachers .” FactCheck.org. 16 Feb 2023.

“ Average Effective Federal Tax Rates – All Tax Units, By Expanded Cash Income Percentile, 2023 .” Tax Policy Center. 14 Oct 2022.

Goldberg, Jeffrey. “ Trump: Americans Who Died in War Are ‘Losers’ and ‘Suckers’. ” 3 Sep 2020.

Baker, Peter and Maggie Haberman. “ Trump Faces Uproar Over Reported Remarks Disparaging Fallen Soldiers .” 4 Sep 2020.

Tapper, Jake. “ Exclusive: John Kelly goes on the record to confirm several disturbing stories about Trump .” CNN. 3 Oct 2023.

Leiserson, Greg and Danny Yagan. “ What Is the Average Federal Individual Income Tax Rate on the Wealthiest Americans? ” White House. 23 Sep 2021.

Budryk, Zack. “ Trump posts climate talking points online before debate with Biden ”. The Hill. 27 Jun 2024. 

“ Climate Change Indicators: U.S. Greenhouse Gas Emissions .” EPA. Updated 27 Jun 2024. 

Popovich, Nadja, et al. “ The Trump Administration Rolled Back More Than 100 Environmental Rules. Here’s the Full List. ” The New York Times. 20 Jan 2021. 

Friedman,Lisa, et al. “ At a Dinner, Trump Assailed Climate Rules and Asked $1 Billion From Big Oil. ” The New York Times. 9 May 2024. 

McGrath, Matt. “ Climate change: US formally withdraws from Paris agreement .” BBC. 4 Nov 2020.

Jaramillo, Catalina. “ Warming Beyond 1.5 C Harmful, But Not a Point of No Return, as Biden Claims .” FactCheck.org. 27 Apr 2023. 

Zraick, Karen. “ How Manhattan Hotels Became Refuges for Thousands of Migrants .” New York Times. 23 Mar 2023.

Izaguirre, Anthony. “ New York City limiting migrant families with children to 60-day shelter stays to ease strain on city. ” AP. 16 Oct 2023.

Goldin, Melissa. “ No, immigrants did not storm New York City Hall in pursuit of luxury hotel rooms. ” 17 Apr 2024.

Lazar, David. “ Mayor signs $275 million deal with hotels to house migrants .” Spectrum News NY1. 15 Jan 2023. 

Nahmias, Laura and Fola Akinnibi. “ NYC Pays Over $300 a Night for Budget Hotel Rooms for Migrants .” Bloomberg. 9 Jun 2023. 

Adcroft, Patrick and Spectrum News Staff. “ New York City signs $77M contract with hotels to house migrant families .” Spectrum News. 24 Jan 2024. 

Diaz, Monica. “ Veteran homelessness increased by 7.4% in 2023. ” VA News. 15 Dec 2023.

Robertson, Lori. “ Trump’s False Claim About Roe .” FactCheck.org. 9 Apr 2024.

U.S. Bureau of Labor Statistics.  Consumer Price Index for All Urban Consumers: Food at Home in U.S. City Average . Retrieved from FRED, Federal Reserve Bank of St. Louis. Accessed 27 Jun 2024.

U.S. Bureau of Labor Statistics.  Consumer Price Index for All Urban Consumers: Food in U.S. City Average . Retrieved from FRED, Federal Reserve Bank of St. Louis. Accessed 27 Jun 2024.

Farley, Robert. “ Trump’s Comments About ‘Cutting’ Entitlements in Context .” FactCheck.org. 15 Mar 2024.

Jaffe, Alan. “ Posts Misrepresent Immigrants’ Eligibility for Social Security Numbers, Benefits .” FactCheck.org. 26 Apr 2024.

Kessler, Glenn. “ No, Donald Trump, migrants aren’t ‘killing’ Social Security and Medicare .” Washington Post. 26 Mar 2024.

Federal Reserve Bank of St. Louis.  All Employees, Total Nonfarm . Accessed 27 Jun 2024.

Federal Reserve Bank of St. Louis.  Employment Level – Foreign Born . Accessed 27 Jun 2024.

Federal Reserve Bank of St. Louis.  Employment Level – Native Born . Accessed 27 Jun 2024.

Robertson, Lori and D’Angelo Gore. “ FactChecking Trump’s Immigration-Related Claims in Phoenix and Las Vegas .” 17 June 2024.

Federal Reserve Bank of St. Louis.  Federal Surplus or Deficit . Accessed 27 Jun 2024.

Congressional Budget Office. “ An Update to the Budget and Economic Outlook: 2024 to 2034 .” Jun 2024.

Gore, D’Angelo and Robert Farley. “ FactChecking Trump’s Iowa Victory Speech .” 18 Jan 2024.

U.S. Department of Justice, Office of Public Affairs. “ Sayfullo Saipov Charged With Terrorism and Murder in Aid of Racketeering in Connection With Lower Manhattan Truck Attack .” Press release. 21 Nov 2017.

U.S. Attorneys Office, Southern District of New York. “ Akayed Ullah Sentenced To Life In Prison For Bombing New York City Subway Station In 2017 On Behalf Of ISIS .” Press release. 22 Apr 2021.

LaForgia, Michael and Eric Schmitt. “ The Lapses That Let a Saudi Extremist Shoot Up a U.S. Navy Base .” New York Times. 21 Jun 2020.

Robertson, Lori. “ Familiar Claims in a Familiar Presidential Race .” FactCheck.org. 11 Apr 2024.

Cybersecurity and Infrastructure Security Agency. “ Joint Statement from Elections Infrastructure Government Coordinating Council & the Election Infrastructure Sector Coordinating Executive Committees .” 12 Nov 2020.

Cummings, William, Garrison, Joey and Sergent, Jim. “ By the numbers: President Donald Trump’s failed efforts to overturn the election .” USA Today. 06 Jan 2021.

Election Law at Ohio State. “ Major Pending Election Cases .” Accessed 28 Jun 2024.

GovInfo.gov.  Transcript of hearing before the House Select Committee to Investigate the January 6th Attack on the United States Capitol.  13 Jun 2022.

Kiely, Eugene. “ Trump Ignored Aides, Repeated False Fraud Claims .” FactCheck.org. 14 Jun 2022.

Robertson, Lori. “ Breaking Down the Immigration Figures. ” FactCheck.org. 27 Feb 2024.

U.S. Customs and Border Protection.  Southwest Land Border Encounters.  Accessed 28 Jun 2024.

Department of Homeland Security. “ Alternatives to Detention .” Accessed 28 Jun 2024.

Farley, Robert. “ Trump’s Unfounded ‘Colossal’ Tax Hike Warning .” FactCheck.org. 17 Apr 2024.

Penn Wharton Budget Model. “ The Updated Biden Tax Plan .” 10 Mar 2020.

Tax Policy Center. “ An Analysis of Former Vice President Biden’s Tax Proposals .” 05 Mar 2020.

Watson, Garrett, and Li, Huaqun. “ Details and Analysis of President Joe Biden’s Campaign Tax Plan .” Tax Foundation. 22 Oct 2020.

White House Website.  Biden’s Proposed Fiscal Year 2025 Budget . Accessed 28 Jun 2024.

Kiely, Eugene. “ A Guide to the Tax Changes .” FactCheck.org. 20 Dec 2017.

Tax Foundation. “ Details and Analysis of President Biden’s Fiscal Year 2025 Budget Proposal. ” 21 Jun 2024.

Congress.gov.  Tax Cuts and Jobs Act.  Introduced 20 Dec 2017.

Joint Committee on Taxation. “ Estimated Revenue Effects Of H.R. 1, The ‘Tax Cuts And Jobs Act.’ ” 06 Nov. 2017.

Gambino, Lauren, et al. “ The unprecedented situation at the US-Mexico border – visualized .” Guardian. 7 Feb 2024.

U.S. Customs and Border Protection.  Border Rescues and Mortality Data . Updated 29 Mar 2024.

International Organization for Migration.  The Americas — Annual Regional Overview . 2022.

Farley, Robert. “ Trump’s Bogus Attack on FBI Crime Statistics .” FactCheck.org. 3 Mar 2024.

Institute for Economics & Peace.  Global Peace Index 2023 . June 2023.

For the claim about Trump and the national debt:

Fiscal Data.  Debt to the Penny . fiscaldata.treasury.gov. Updated 27 Jun 2024.

Treasury Direct.  FAQs About the Public Debt . Accessed 27 Jun 2024.

Robertson, Lori. “ Biden Leaves Misleading Impression on U.S. Debt .” FactCheck.org. 13 Aug 2021.

Congressional Budget Office. “ The Budget and Economic Outlook: 2017 TO 2027 .” Jan 2017.

Cubanski, Juliette and Tricia Neuman. “ The Facts About the $35 Insulin Copay Cap in Medicare .” KFF. 12 Jun 2024.

IMAGES

  1. 📗 Artificial Intelligence Essay Sample

    will ai help the world or hurt it essay

  2. Artificial Intelligence Essay

    will ai help the world or hurt it essay

  3. Exploring the Impact of Artificial Intelligence on Society: Will AI

    will ai help the world or hurt it essay

  4. Essay on Artificial Intelligence

    will ai help the world or hurt it essay

  5. Will AI help the world or harm it?

    will ai help the world or hurt it essay

  6. What is Artificial Intelligence Free Essay Example

    will ai help the world or hurt it essay

VIDEO

  1. What does the future hold for AI?

  2. Decoding AI

  3. نفرت ختم کریں، محبت دکھائیں

  4. Will AI run the world better than humans?

  5. Will AI help or harm humanity?

  6. 5 SHOCKING Dangers of AI for Humanity

COMMENTS

  1. Here's Why AI May Be Extremely Dangerous--Whether It's Conscious or Not

    A 2023 survey of AI experts found that 36 percent fear that AI development may result in a "nuclear-level catastrophe.". Almost 28,000 people have signed on to an open letter written by the ...

  2. Why AI Will Save the World

    Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it. First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other ...

  3. Artificial intelligence is transforming our world

    Endnotes. This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound ...

  4. Artificial Intelligence: The Helper or the Threat? Essay

    Essay. The principles of human intelligence have always been of certain interest for the field of science. Having understood the nature of processes that help people to reflect, scientists started proposing projects aimed at creating the machine that would be able to work like a human brain and make decisions as we do.

  5. How will AI change the world? 5 deep dives into the technology's ...

    NPR Explains. AI is a multi-billion dollar industry. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot ...

  6. The case that AI threatens humanity, explained in 500 words

    The case that AI threatens humanity, explained in 500 words. The short version of a big conversation about the dangers of emerging technology. Kelsey Piper is a senior writer at Future Perfect ...

  7. How Could AI Destroy Humanity?

    But they've been light on the details. Last month, hundreds of well-known people in the world of artificial intelligence signed an open letter warning that A.I. could one day destroy humanity ...

  8. This is how AI can help humanity

    AI can already detect early signs of diabetes from heart rate sensor data, help children with autism manage their emotions, and guide the visually impaired. If these innovations were widely available and used, the health and social benefits would be immense. In fact, our assessment concludes that AI technologies could accelerate progress on ...

  9. How artificial intelligence is transforming the world

    April 24, 2018. Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision ...

  10. Artificial Intelligence For Good: How AI Is Helping Humanity

    Bottom Line. Artificial intelligence has enormous potential to serve society, bringing more radical innovations for humans in the future. Its problem-solving ability could help people and ...

  11. The Future of AI: How AI Is Changing the World

    AI in Education. AI in education will change the way humans of all ages learn. AI's use of machine learning, natural language processing and facial recognition help digitize textbooks, detect plagiarism and gauge the emotions of students to help determine who's struggling or bored. Both presently and in the future, AI tailors the experience ...

  12. How Will AI Change War?

    If war plans and railway timetables caused World War I, perhaps AI will cause World War III. That AI will change warfare is undeniable. From enabling predictive maintenance of hardware to ...

  13. Can philosophy help us get a grip on the consequences of AI?

    Second, RLAIF itself demands deeper philosophical investigation. The basic idea is that the AI evaluator draws from a list of principles - a 'constitution' - in order to determine which of two completions is more compliant with it. The inventor and leading proponent of this approach is Anthropic, in their model Claude.

  14. 5 ways AI is doing good in the world right now

    Artificial intelligence is being used to help crack down on illegal activities. It is helping to tackle overfishing in our ocean. And it is even being used to address the issue of gender imbalance. In the wake of the pandemic, a growing number of businesses are speeding up plans to adopt AI and automation, according to the World Economic Forum ...

  15. 8 ways AI can help save the planet

    Distributed energy grids. AI can enhance the predictability of demand and supply for renewables across a distributed grid, improve energy storage, efficiency and load management, assist in the integration and reliability of renewables and enable dynamic pricing and trading, creating market incentives. 3. Smart agriculture and food systems.

  16. Artificial Intelligence and How It Changes the World: Argumentative Essay

    AI has made huge strides in the last 50 years which was when it was introduced into the world of computers. It has been used for many things such as games, learning, in the workforce, and even in driving now. AI is changing the world in many good ways and helping the world prosper into something great with new advancements and better lives.

  17. Is Artificial Intelligence Helping or Hurting Human Employment?

    Others argue that artificial intelligence will help create a brighter future for the workforce by creating more jobs. This camp argues that robots are being to replace jobs that aren't good or high-paying, to begin with, and the use of robots or AI agents for these kinds of jobs will empower and influence human employees to work towards ...

  18. Opinion

    To that end, I propose six maxims. The first is a famous quip attributed to the Carthaginian general Hannibal: "I shall either find a way or make one.". AI can help us find paths that we ...

  19. Artificial Intelligence Argumentative Essay

    Artificial Intelligence has revolutionized the way we approach security risks by offering advanced and intelligent monitoring systems. These systems can detect potential security threats through various mechanisms, including facial recognition, predictive analytics, and anomaly detection. It can also identify patterns and trends that humans ...

  20. ≡Essays on Artificial Intelligence: Top 10 Examples by

    Computer, Machine learning, Neural network, Patient, Radiology. 1 2 … 5. Our free essays on Artificial Intelligence can be used as a template for writing your own article. All samples were written by the best students 👩🏿‍🎓👨‍🎓 just for you.

  21. How Will Artificial Intelligence Affect Jobs 2024-2030

    You would have been living under a rock if you did not know how artificial intelligence is set to affect jobs in 2024-2030. AI like ChatGPT seems to be stealing all of the headlines at the moment, Google unveiled new AI software to build presentations, analyze and enter data, and write content, and there are so many more AI tools like Gamma and Numerous AI.

  22. AI will transform the character of warfare

    AI's deeper significance is what it can do before the drone strikes. Because it sorts through and processes data at superhuman speed, it can pluck every tank out of a thousand satellite images ...

  23. If AI is so good, why are there still so many jobs for translators?

    Why AI Didn't Kill The Translation Star (At Least Not Yet) For a more bullish take on AI, back to Duolingo CEO Luis von Ahn. Von Ahn, like many other technologists, sees AI ushering in a ...

  24. The Economist Who Figured Out What Makes Workers Tick

    Economist David Autor's work found that the introduction of the computer hurt the incomes of middle and working-class Americans. But he says AI is different. WSJ sits down with Autor to talk AI ...

  25. Biden's presidential debate performance sends Democrats into a panic

    "Biden just had to beat himself; unfortunately the stumbling and diminished Joe Biden the world has come to know made Trump look competent and energetic," said a former Trump campaign official ...

  26. ChatGPT: Everything you need to know about the AI chatbot

    ChatGPT, OpenAI's text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to hyper-charge productivity through writing essays and code ...

  27. The Kipper AI Advantage: The Best AI Essay Writer, AI Detector ...

    Kipper AI Essay Writer: Your Go-To AI Writing Tool. Without question, Kipper AI's main product, the AI essay writer, is one of the best writing tools available.This tool helps students write well ...

  28. FactChecking the Biden-Trump Debate

    Biden said Trump called U.S. veterans killed in World War I "suckers and losers," which Trump called a "made up quote." The Atlantic reported that, based on anonymous sources.