
Posts 1 - 15 of 15   
The Prospect of an Advanced AI: 3/31/2017 21:50:14

Level 60
A lot of this is fairly old news, but it still applies now, maybe even more so than 1 year/6 months ago.

So today I saw Sargon of Akkad's video about this from 7 months ago show up in my feed somehow:


And I looked up a few things about it:



Apparently there is nothing to fear until 2050, but what if we have a sudden breakthrough and lose control of AI? Obviously I'm not the most knowledgeable person in the world, or even on this forum, but I have seen how neural networks work and how rapidly they can respond.


The first minute or so is very cringy, but overall it's a fantastic video showing how a neural network can replicate Bach's music. How long until we cannot distinguish real artists from mechanical ones?
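For a rough sense of how a model "learns" a musical style, here's a toy sketch in Python. It uses a simple first-order Markov chain instead of the neural network the video describes, and the note corpus is made up purely for illustration:

```python
import random
from collections import defaultdict

# Tiny made-up "melody" corpus: each letter stands for a note.
corpus = "CDECDECEGCEGEFEDCEFEDC"

# Learn first-order transition counts: which note tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a new 'melody' that mimics the corpus's note-to-note statistics."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(transitions[notes[-1]]))
    return "".join(notes)

print(generate("C", 16))
```

A real music model works on the same principle, just with a far richer notion of context (whole phrases, harmony, rhythm) learned by a neural net rather than a lookup table.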

Overall, I think that AI is a big problem and will eventually turn out to have mostly negative impacts. While we may see huge benefits for the next 10 years or so, we are going to have some serious competition in the near future.
- downvoted post by Жұқтыру
- downvoted post by KARRRRRRRR UL
The Prospect of an Advanced AI: 4/2/2017 03:10:14

Level 60
That was actually mildly amusing, for a person who should be IP banned again.

Edited 4/2/2017 03:10:29
The Prospect of an Advanced AI: 4/2/2017 03:46:40

Level 60
Well no, having humans partially controlled by robots could have them working against their own interests. Either we are doomed to submission or we have to fight.
- downvoted post by GeneralPE
The Prospect of an Advanced AI: 4/4/2017 15:23:42

Level 58
Tbh I'm more worried about technically illiterate armchair philosophers restricting the progress of humanity than I am about AI suddenly turning into SkyNet. Yes, there are dangers, but it seems like most of humanity is more interested in just jumping on that and screaming about the dangers- perhaps, one day, to the detriment of research- than it is in actually understanding what a near-future general AI could look like and what the actual challenges/risks are. I strongly and sincerely doubt that a general AI could harm humanity any more than humanity seems to want to itself.

Furthermore, while the field of machine learning is making incredible leaps, I don't think we understand enough about generalized learning and how brains work (certainly not the way computers work- we're not going to just hit a memory/processing power threshold and voila, our computers can suddenly think and want; the architecture of our brains and of our computers is fundamentally different in some interesting ways- consider, for example, tasks like calculation that computers just completely outclass us in vs. tasks like tone recognition where computers at the moment stand absolutely no chance) to actually find ourselves on the verge of creating general AI. We're just beginning to understand how to make computers actually learn simple human tasks like image classification- and even the deep neural nets we have right now are built on top of these assumptions we make with Turing machines.

A general AI isn't just a machine that can classify images, effectively control autonomous vehicles, make music, or beat humans at Go. It has the ability to understand arbitrary fields, fine-tuning itself and to some extent even building its own models, to have some degree of consciousness and the ability to need and want things other than more memory, processing power, etc.- to feel complex more-than-sensory things like pain and pleasure and perhaps to even decide to pursue or avoid some of them.

Remember when we got excited about human space exploration when we basically took a few footsteps in our own cosmic backyard? Right now, we're on the verge of maybe a moon landing for AI/ML... and maybe we should worry about something other than the exciting sci-fi possibilities of AI (from a lazy pseudo-philosophical standpoint, too- is this really anything better than self-fellation, to remark on these unrealized and poorly-understood prospects and fascinate ourselves without actually striving to understand the subject at hand like a philosopher would or contributing to the development and understanding like a scientist would?). I find it so odd how the human interest in control and domination also seems to often couple itself with laziness, but perhaps it's really not that surprising that the sort of people concerned with fear and regulation aren't particularly interested in spending time to learn, understand, or build.
The Prospect of an Advanced AI: 4/4/2017 16:59:12

Level 58
It threatens humans with competition for everything.

Machines may become intelligent on their own, and then it's mainly machine life from there on...


In one of Isaac Asimov's stories, an AI that writes books was invented. That meant that writing was now a dead profession and some of the creativity was gone.

It's much the same if you find a way to mass-produce songs. Everyone will sell rap and rock'n'roll because they sell very well.
The other, less dominant types of music will become extinct.

Mass production can be bad. We also don't really need AI to innovate or to solve problems that no one else wants to solve...

Humanity has invented a lot by itself, without all the techno-tool magic.

Machines may desacralize human creativity in different ways. That is bad, because it goes against human nature.

It is possible that machines will become a million times better at creating than humans.
The Prospect of an Advanced AI: 4/4/2017 18:57:21

[REGL] Pooh 
Level 62
I'm not too worried about a skynet situation, but rather economics.

As an aside: I like this guy's writing, and he's got a couple articles on point. Here's one:

Returning to topic, Two work-force disruptive AI technologies:

-Customer Service
-Self Driving Automobiles

These two sectors represent a significant number of jobs that will be made redundant by AI. I read one article that said about 2/3 of all customer service calls can be resolved by an AI like Watson. Uber already has self-driving cars in prototype, and I've written a few technology papers about self-driving long-haul trucking.

These jobs won't be coming back; they'll be taken over by computers that are available 24/7 and don't need health insurance. Now one technician can run the computers that drive 100 trucks or answer 1,000 phone calls.

How does this trickle down/up? Theoretically costs would go down significantly, making living cheaper. But now we'll have swaths of people without jobs.
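A back-of-the-envelope version of that labor math, in Python. Every figure here is hypothetical, just to show the shape of the calculation:

```python
# Hypothetical figures: 100 long-haul trucks, one driver each today,
# vs. one technician overseeing the whole automated fleet.
drivers = 100
driver_salary = 45_000           # assumed annual pay per driver
technician_salary = 70_000       # assumed pay for the one fleet technician
robot_upkeep_per_truck = 5_000   # assumed annual maintenance/compute per truck

human_cost = drivers * driver_salary
automated_cost = technician_salary + drivers * robot_upkeep_per_truck
savings = human_cost - automated_cost

print(f"labor cost before: ${human_cost:,}")
print(f"labor cost after:  ${automated_cost:,}")
print(f"annual savings:    ${savings:,} ({savings / human_cost:.0%})")
```

Whether that gap shows up as cheaper living, or just as concentrated profit, is exactly the distribution question.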

Bill Gates said we should tax the robots that replace human jobs, with the proceeds going toward supporting a universal income. The irony is, how many jobs did Bill Gates kill while making himself rich? Should there be an income tax required of our computers that run Microsoft Excel (no more number-cruncher jobs) and our printers (no more printing-press jobs)?
The Prospect of an Advanced AI: 4/4/2017 20:19:27

Level 55
Throughout all of human history, everywhere, technological progress has obsoleted many, many jobs (mammoth shares of the workforce), and the reaction from some has always been the same: trying to stop technological growth. But technological growth has never led to less employment; it just leads to new jobs, and I don't see why this is any different. So folk will get new jobs.
The Prospect of an Advanced AI: 4/4/2017 20:41:35

Level 58
@Pooh: That's a fair set of concerns and certainly would not require advanced AI to accomplish.

But: a) I don't think these jobs will go unreplaced: technological and economic advancement have always disrupted job security, and it's a toss-up as to whether this time is the exception to the rule that advancement "creates" more jobs than it takes away (then again, progress is not automatic, and it comes down to whose hands are on the steering wheel); and b) if it does turn out that AI takes away a significant chunk of human jobs, then we need to fix our apparently inflexible and overly volatile economic models, not our technology.

Bill Gates said we should tax the robots that replace a human job, and this goes towards supporting a universal income.

While that's a popular quote, unfortunately it's neither viable nor likely to be effective.


To pull from that (short, worthwhile) article:

"I don't think you'd want to sort of tax all labor-saving devices," said William Gale, an economist who is the co-director of the Tax Policy Center in Washington. "That would put a big crimp on productivity growth, and so I think there needs to be a justification for the tax and a definition of what we're calling a robot here."

On top of that, that tax would hamper technological and economic advancement.

The thing about replacing human workers with robots is that it does not decrease the production of value- i.e., in terms of economic output, we're not losing out. It's a concern about the distribution of wealth, not overall economic health, and so perhaps we should just do away with our (once helpful, no doubt) notion that we should base the distribution of societal wealth on how much we value an individual's participation in our large-scale economic system (i.e., the idea that people's ability to secure opportunities for themselves and people they care about should be based on the value placed on their economic production by existing market structures or their boss).

It's odd that we're worried about having a shortage of jobs rather than resources- certainly counterintuitive and ultimately purely a consequence of the economic systems we choose to live under. Instead of becoming anxious about this, we could be celebrating a key step toward a post-scarcity society where economic output can be managed through different mechanisms and where people can easily obtain the resources they need to pursue their passions, take risks, and build new things (that, at the moment, robots can't do- and likely won't be able to do very well even after we stumble into general AI).

Certainly, the incentives for ownership (not entrepreneurship, but ownership) placed by our current iteration of capitalism might not be that helpful in a world where labor is mostly automated. What if, instead of rewarding someone for merely owning a factory and a distribution chain that would at that point be handled mainly by robots, we instead supplied people with the opportunity to do something that would be sorely lacking in most automated economies (and yet be their main driver of growth)- create new products, take risks, understand, explore, and express themselves?

For the first time ever in our history, we as a species will have the opportunity to not live paycheck-to-paycheck and to enable ourselves to pursue grand dreams untethered by pressing immediate needs.

Why are we afraid of this sort of economic transformation? What does it say about our developments so far that we're unable to embrace it? It's so curious that the same bright future that drives some researchers out of bed in the morning scares the heebie-jeebies out of so many others.

So don't be afraid of AI; be afraid of humans.

TL;DR: Don't use the economic profits of automation to feed more Trumps, Rineharts, and Waltons; re-engineer economic distribution channels so instead we can build up more Ferraris, Musks, and Curies. Feed human progress, not human decadence.

Edited 4/4/2017 20:44:23
The Prospect of an Advanced AI: 4/4/2017 20:56:49

Level 58
It's just a question of what sort of behavior you choose to create incentives/disincentives for through your economic system- i.e., how people obtain money/the ability to acquire resources.

Difficult question, of course, just as it must've been difficult for the Minoans to realize that their palace distribution economy was not ordained by nature. At present, our incentives for creation are really just incentives for control- i.e., the ability to reap some share of economic output created using tools that you legally control only directly incents ownership, not the process of creating and acquiring those tools that we're probably actually more interested in (after all, our grand stories of potential are about entrepreneurship and invention, not owning something that already exists)- and it's certainly not the case that the two are naturally entangled to the point that you can't extract one activity from the other.

Instead of patching our economic system, perhaps it's time to make another change- it's been a few centuries since we swapped mercantilism for the current set of systems, anyway. Having and creating are not the same thing, and post-automation they will be less intertwined than ever.

Edited 4/4/2017 20:58:14
The Prospect of an Advanced AI: 4/5/2017 02:13:55

Level 58
Yep, and he channeled those resources to create progress in a field where privatization is a rather difficult task. Counts as success in my book.
The Prospect of an Advanced AI: 4/5/2017 02:32:36

Level 60
Well, Skynet isn't really what I'm most worried about. As many people have said, outsourcing of human jobs to AI can create immense wealth inequality, because some people just aren't suited to work in STEM fields.
The Prospect of an Advanced AI: 4/5/2017 17:58:57

Castle Bravo
Level 56
>Make super genius AI to deal with world's problems

>It wipes out the non-white races b/c the Marxists forgot a line of code