Is ChatGPT a ‘virus that has been released’? • TechCrunch

More than three years ago, this editor met Sam Altman for a small event in San Francisco shortly after he stepped down as president of Y Combinator to become CEO of the artificial intelligence company he co-founded in 2015 with Elon Musk and others, OpenAI.

At the time, Altman described the potential of OpenAI in language that sounded strange to some. Altman said, for example, that the opportunity with artificial general intelligence — machine intelligence that can solve problems as well as a human — is so great that if OpenAI can figure it out, the team could "maybe capture the light cone of all future value in the universe." He said the company was "going to have to not publish research" because it is so powerful. Asked whether OpenAI was guilty of fear-mongering (Musk has repeatedly called for all organizations developing AI to be regulated), Altman spoke about the dangers of not thinking through the "societal consequences" when "you're building something on an exponential curve."

The audience laughed at various points in the conversation, unsure how seriously to take Altman. No one is laughing now. While machines are not yet as intelligent as people, the technology that OpenAI has since released is surprising many (including Musk), and some critics fear it could be our undoing, especially with more sophisticated technology reportedly coming soon.

Indeed, while heavy users insist it is not that smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they will be able to distinguish original writing from the algorithmically generated essays they are bound to receive — essays that can evade anti-plagiarism software.

Paul Kedrosky is not an educator per se. He is an economist, venture capitalist, and MIT fellow who calls himself a "frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems." But he is among those who are suddenly worried about our collective future, tweeting yesterday: "[S]hame on OpenAI for dropping this unrestrained pocket nuke on an unprepared society." Kedrosky wrote: "Obviously I feel ChatGPT (and its ilk) should be withdrawn immediately. And, if ever reintroduced, only with tight restrictions."

We spoke to him yesterday about some of his concerns, and why he thinks OpenAI is driving what he believes is the "most disruptive change the US economy has seen in 100 years" — and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT came out last Wednesday. What sparked your reaction on Twitter?

PK: I’ve played with these conversational user interfaces and AI services in the past, and this is obviously a big leap beyond them. And what troubled me here in particular is the casual brutality of it, with massive consequences for a number of different activities. They’re not just the obvious ones, like high school essay writing, but almost any domain where there’s a grammar — [meaning] an organized way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily eaten by this ravenous beast and spat out again with no compensation to whatever was used to train it.

I heard from a colleague at UCLA who told me they have no idea what to do with the essays at the end of the current term, where they get hundreds per course and thousands per department, because they no longer have a clue what’s fake and what’s not. So doing this so casually, as someone said to me today, is reminiscent of the proverbial [ethical] white-hat hacker who finds a bug in a widely used product and informs the developer before the general public knows, so the developer can patch their product and we don’t have mass devastation and power grids going down. This is the opposite, where a virus has been released into the wild with no concern for the consequences.

It feels like it could take on the world.

Some might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers were put out of work?’ Because this is a broader kind of phenomenon. But this is very different. These specific learning technologies are self-catalyzing; they are learning from the requests. So robots in a manufacturing plant, while disruptive and creating incredible economic consequences for the people working there, didn’t then turn around and start absorbing everything going on inside the factory, moving sector by sector — whereas that’s not only what we can expect, it’s what we should expect.

Musk left OpenAI in part over disagreements about the company’s direction, he said in 2019, and has long talked about AI as an existential threat. But people complained that he didn’t know what he was talking about. Now we are faced with this powerful technology, and it’s unclear who steps in to address it.

I think it’s going to start in a lot of places at once, most of which will look very clunky, and people [will then] make fun of it, because that’s what technologists do. But too bad, because we got into this by creating something of such consequence. So in the same way that the FTC required people who blogged years ago [to make clear they] had affiliate links and were making money from them, I think at a trivial level people will be forced to disclose that ‘We didn’t write any of this. This is all machine-generated.’ [Editor’s note: OpenAI says it’s working on a way to “watermark” AI-generated content, along with other “provenance techniques.”]

I also think we will see new energy around the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine-learning algorithms. I think there’s going to be a broader DMCA issue here with this service.

And I think there is the potential for a [massive] lawsuit and settlement eventually over the consequences of these services, which, as you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [that place] with respect to these technologies.

What is the thinking at MIT?

Andy McAfee and his group there are more sanguine and hold the more orthodox view that every time we see disruption, other opportunities get created, people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound as to think this particular evolution of technology is the one around which we can’t mutate and migrate. And I think that’s broadly true.

But the lesson of the last five years in particular has been that these changes can take a long time. Free trade, for example, was one of those incredibly disruptive, economy-wide experiences, and we all told ourselves as economists watching this that the economy would adapt and people in general would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].

You talked about high school and college essay writing. One of our kids has already asked (theoretically!) whether it would be plagiarism to use ChatGPT to write a paper.

The purpose of writing an essay is to prove that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t let people do homework because we no longer know whether they’re cheating or not, that means everything has to happen in the classroom and must be supervised. We can’t have anything we take home. More things must be done orally, and what does that mean? It means school just got a lot more expensive, a lot more artisanal, a lot smaller, and at the exact moment that we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service.

What do you think about the idea of a universal basic income, or of enabling everyone to share in the gains from AI?

I’m a much less strong proponent than I was before COVID. The reason is that COVID, in a sense, was an experiment with a universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really nervous about what happens when people don’t have to get in a car, drive somewhere, do a job they hate, and come home again, because the devil finds work for idle hands, and there will be plenty of idle hands and a lot of mischief.
