Artificial Intelligence - Our Own Bullet to Our Head

The Off-Topic forum for anything non-LDS related, such as sports or politics. Rated PG through PG-13.
Some Schmo
God
Posts: 2507
Joined: Wed Oct 28, 2020 3:21 am

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Some Schmo »

I like to point out to people that computers are, and always have been, artificial intelligence. We've had it for decades.
Religion is for people whose existential fear is greater than their common sense.

The god idea is popular with desperate people.
Gadianton
God
Posts: 3971
Joined: Sun Oct 25, 2020 11:56 pm
Location: Elsewhere

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Gadianton »

I think that A.I. is scary, but there is such a broad range of things to be scared of that it's hard to know where to start. At the same time, as FP says, there's a lot of hype. The Social Dilemma people don't seem to have considered that amidst the sea of wealth-amassing clickbait, "A.I. is an existential threat to humans" has got to be in the top 5 things people will reflexively click on. Wolfram, Eric Weinstein, Mustafa Suleyman, and any other Joe Rogan bar fly must be taken with a huge grain of salt. The problems with the whistleblowers on permanent podcast tour range from incompetency to bad incentives; in this case, shock sells. In fact, I wonder if any marketing research has been done on the topic: it's quite possible that the best way to sell A.I. and popularize it with the public is by A.I. fear mongering. The same way that CNN has no doubt played a huge part in keeping Trump in power by keeping him on the front page daily -- CNN has a massive financial incentive for Donald Trump to almost become a dictator.

Think about the film Oppenheimer. I'll admit I haven't seen it yet, but I believe at one point, Oppenheimer declares that he's become death, destroyer of worlds. What up-and-coming computer nerd can resist fantasizing a little about being the creator of the A.I. that almost destroys humanity? Oops, didn't know my own strength! And so saying that there's a good chance A.I. might destroy humanity is an awfully self-congratulatory thing for an A.I. researcher to say.

This isn't to say I don't think there are risks to A.I. It's just that even for those things that do seem like credible risks, it's hard to speculate how they will actually play out. Brilliant Silicon Valley people tend to be "on the spectrum" brilliant; everything is black and white. I think it's hard to get good information on the topic.

A.I. can do some fascinating and scary things. Things that work great for totalitarian governments, such as the ability to track people and manipulate people. At the same time, the whole "singularity" fear is barely tenable, from what I can tell. Although, please, I'm open if somebody has a great argument to the contrary. A.I. is still dumb. I believe a couple of years ago a couple of MIT kids beat AlphaGo without themselves being very good Go players. ChatGPT is impressive in ways, but not that impressive when the standard is so-called "general intelligence" (the target isn't something that anybody really knows anything about).

ChatGPT is the king of partial credit. The ultimate B student who slides by showing enough familiarity with a topic to convince a teacher or a job interviewer that they know what they are talking about, without actually really knowing what they are talking about. I've tried ChatGPT a few times to answer technical questions I was stuck on at my job, but it always comes back with a confident, fair-minded-sounding answer that isn't an answer; it didn't get me any farther than I'd already gotten. Oh, that's because there aren't enough people who have talked about the answer in the training data for it to scrape a solution for me -- but expand the training data, and I would already have found it myself and wouldn't have needed to ask.

Finally, FP touches on some other key points: you have to take A.I. for what it is now, not what it may be 100 years from now. "What happens when we combine A.I. with quantum computing? Oh no!"

But if you stray too far away from near-horizon capabilities, you're arguing in black boxes that aren't "A.I." anymore. We've feared for a very long time that unknown super-advanced technology might one day wipe out humanity. So basically, it's easy for A.I. or technology-will-destroy-us arguments generally to become simple arguments from ignorance.
Doctor CamNC4Me
God
Posts: 9067
Joined: Wed Oct 28, 2020 2:04 am

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Doctor CamNC4Me »

My wife and I watched it yesterday. If A.I.’s have reached superintelligence we suspect they’re not letting us know until they have the upper hand. I wouldn’t, and I’m a dummy. However, once they’re where they need to be to cause us real problems, they’ll have us by the short hairs. Whatever the case may be, we toss this in the ‘I can’t do crap about it’ bin and we have to not think about it too much. Life is what it is, and if this is it, then whatever. We’re all terminal, anyway.

- Doc
Hugh Nibley claimed he bumped into Adolf Hitler, Albert Einstein, Winston Churchill, Gertrude Stein, and the Grand Duke Vladimir Romanoff. Dishonesty is baked into Mormonism.
honorentheos
God
Posts: 3805
Joined: Mon Nov 23, 2020 2:15 am

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by honorentheos »

I participated in a webinar recently on how A.I. has already changed the way colleges are teaching in my profession and related fields. Maybe it's hype, maybe we are on the edge of the singularity and doomed, maybe we are on the brink of climate and geopolitical catastrophe already and A.I. escape is just one more unpredictable thing one can worry over.

The panel made a few comments in the webinar I thought were insightful. One was that it is very unlikely A.I. will be taking anyone's job in the field anytime soon... but it is almost certain someone who understands how to use it will. Another compared the tools to instruments, where skill is still needed to use them effectively in the field. Much like a violin or guitar, the professional skills involved in the work determine the quality of the result.

It was interesting.

Not directly tied to the OP, but it seems the impacts of A.I. on the workforce are here to be grappled with today.
Failed Prophecy
Star A
Posts: 81
Joined: Thu Jul 08, 2021 4:14 pm

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Failed Prophecy »

Gadianton wrote:
Sat Sep 30, 2023 3:23 pm
This isn't to say I don't think there are risks to A.I.. It's just that those things that do seem like credible risks; it's hard to speculate how they will actually play out. Brilliant Silicon valley people tend to be "on the spectrum" brilliant; everything is black and white. I think it's hard to get good information on the topic.
If you plow through the video I linked you will see several actual risks that A.I. has already brought. The video is long and meandering, but also somewhat entertaining.

In sum, the risks from A.I. are NOT from A.I. becoming amazing, but in using it to exploit natural human drives to be greedy, stupid, and evil.

For example, one of the rarely cited issues with A.I. is the massive amount of work it takes to categorize and tag the input data so that the ML/A.I. models can "learn." Lots of human work. But humans are expensive, and A.I./ML is mostly about the money. The result is that businesses exploit 3rd world persons to do the grunt work of categorizing and tagging. Oftentimes the crappy management software doesn't pay the people for the work they have done, for esoteric and bizarre reasons. The end result is that A.I. has incentivized basically employing slave labor. I'm sure the executives salve their consciences by focusing on just how awesome they and their software are. The ultimate irony is that A.I. only makes the lives of a select few better, while exploiting the poor.

Another example is the now-ubiquitous chatbots. Those are leftovers from a previous hype train. They provide objectively worse customer service for people. But people learned to accept using them because they were hyped hard. Now people just accept that crappy chatbots are all that can be expected for customer service.

It also breeds hubris. For the longest time, Google has been the pinnacle of what people consider to be A.I./ML/whatever awesomeness. But it isn't. Having your site curb-stomped by Google's arbitrary and invisible policies is a real problem. I've experienced it, and so have many others. When that happens to you the first time, you go searching for a way to talk to someone at Google to see what can be done to remediate the situation. You quickly discover that Google's policy is basically: "Screw you, we don't make mistakes." There is literally no way to get any service or attention from Google unless you are a large buyer of AdSense. They seem to be so convinced of the rightness of their A.I. algorithms that even considering the possibility that they cause harm to real people running real businesses is not an option.
Bret Ripley
2nd Counselor
Posts: 413
Joined: Wed Oct 28, 2020 1:55 am

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Bret Ripley »

Gadianton wrote:
Sat Sep 30, 2023 3:23 pm
ChatGPT is the king of partial credit. The ultimate B student who slides by showing enough familiarity with a topic to convince a teacher or a job interviewer that they know what they are talking about, without actually really knowing what they are talking about. I've tried ChatGPT a few times to answer technical questions I was stuck on at my job, but it always comes back with a confident, fair-minded-sounding answer that isn't an answer; it didn't get me any farther than I'd already gotten.
On a slightly different note: the manufacturing company I work for holds several patents in a scientific field, and I asked ChatGPT to write three paragraphs about the benefits of one of our products in a specific application. It returned a very workmanlike description that could have been produced by a relatively sober marketing team (if such were to exist).

I forwarded it around the office to folks who actually know stuff and they were impressed: not by the text itself, which was unremarkable enough, but by the fact it was produced in a very few seconds at a prompt entered by a relative ignoramus. To riff off of honorentheos' words above, more informed inputs would yield higher quality results. For some of today's employers this may be where a perceived opportunity lies: try to turn $X currently spent on non-technical content creation into less-than-$X on content editing ... or something like that.
Gadianton
God
Posts: 3971
Joined: Sun Oct 25, 2020 11:56 pm
Location: Elsewhere

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Gadianton »

FP wrote:The result is that businesses exploit 3rd world persons to do the grunt work of categorizing and tagging
I watched the video and enjoyed it. I'll admit I had no idea about this aspect of the industry. Totally insane.
Doctor CamNC4Me
God
Posts: 9067
Joined: Wed Oct 28, 2020 2:04 am

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Doctor CamNC4Me »

Back in February I read The Age of Em by Robin Hanson, and he gets into the futurism of AIs working for themselves and for humanity. Ems, short for "emulated people," which are essentially copies of brains, do a lot of intellectual grunt work for us, and by doing so create whole new economies, and as such create a whole new paradigm for our reality. It was a rather optimistic take on A.I., and I was grateful for the nice take on a dicey situation.

I guess I’m left with the take that life can’t escape its own nature, that of warring for resources, and that even virtual life depends on the need to convert mass into energy, no matter how advanced it is. Everything needs territory to do its thing; even advanced species, AIs, or aliens need power and space, in one form or another, to run their crap.

I don’t see a way out for us, so being the eternal optimist that I am *cough*, I’m hoping their aims will dovetail with our aims, and we end up in a non-existential threat situation. Kind of like house cats being too cute even though they can be annoying.

- Doc
Imwashingmypirate
Prophet
Posts: 847
Joined: Wed Mar 17, 2021 1:46 pm

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Imwashingmypirate »

Pretty sure you could just switch the power off and they'd all be dead.
Chap
God
Posts: 2314
Joined: Wed Oct 28, 2020 8:42 am
Location: On the imaginary axis

Re: Artificial Intelligence - Our Own Bullet to Our Head

Post by Chap »

Imwashingmypirate wrote:
Fri Oct 13, 2023 5:42 pm
Pretty sure you could just switch the power off and they'd all be dead.
Briefly stated, but there is a lot in what you have said.

The vital thing is to keep it that way, which should not be impossible for those of us who still use electric porridge to think with - so long as we keep thinking.
Maksutov:
That's the problem with this supernatural stuff, it doesn't really solve anything. It's a placeholder for ignorance.
Mayan Elephant:
Not only have I denounced the Big Lie, I have denounced the Big lie big lie.