Building robot McDonald's staff 'cheaper' than hiring workers on minimum wage



15 hours ago, atl falcon 89 said:

I could envision a dystopian fictional setting where they simply plug your case into a database that serves as Judge/Lawyer/Jury; it considers the evidence, pulls precedent from its memory banks, and out pops your verdict. The end of freedom as we know it, sure, but at least it would be fair regardless of race, gender, and income (in theory). :ninja:

It's not like we don't already have the technology; we just need a situation fuq'd up enough that they could get away with implementing it.

Unless it can think, there will never be any new precedent with this system. 


12 hours ago, Free Radical said:

Norway, Sweden, and Finland all have government-run educational systems, better pay due to collective bargaining (nearly everyone belongs to a union), better benefits, and all workers are guaranteed 25 paid vacation days on top of 16 work-free holidays a year.

Yet they beat us on educational scores, average wages, and debt relative to GDP, and they have higher economic upward mobility and less corruption. How can they do it, but we can't?

None of those things are bad ideas, but I've never understood comparing America to countries that are roughly the size of Montana and have roughly the same GDP as Illinois. It's just not a good comparison.


1 minute ago, Free Radical said:

Why? How do any of these concepts not scale? More overhead? That seems to be the extent of it. 

Some do, like vacation and maternity leave and such. Others, like fixing the education system and corruption, are far more complex in a country as large and diverse as ours. Our government is already too large for its own good, and waste is chronic across the board. Scaling up and adding more layers of government at this point is counterproductive. Turning this thing around is going to take a bottom-up approach. The federal government is going to have to rely on the states to get the job done and stop trying to run it all itself.


6 minutes ago, Free Radical said:

Why? How do any of these concepts not scale? More overhead? That seems to be the extent of it. 

They're pretty homogeneous in the sense that the needs of the population don't significantly differ from one part of the country to the next. Because of that, it's fairly easy to implement unitary policies. That is very, very far from the case in the United States, which is further muddled by the states and the federal government butting heads.

It also ignores issues such as the massive tax rates in Sweden and Denmark, how Norway is largely funding its policies with oil money, and so on and so forth.


5 minutes ago, Psychic Gibbon said:

They're pretty homogeneous in the sense that the needs of the population don't significantly differ from one part of the country to the next. Because of that, it's fairly easy to implement unitary policies. That is very, very far from the case in the United States, which is further muddled by the states and the federal government butting heads.

It also ignores issues such as the massive tax rates in Sweden and Denmark, how Norway is largely funding its policies with oil money, and so on and so forth.

Needs of the population? Expand on that point. 


13 minutes ago, Free Radical said:

Needs of the population? Expand on that point. 

An example would be cost of living. It isn't very different depending on where you live in those countries, so issues related to it can be addressed in a singular way. Meanwhile, cost of living in the US can differ wildly depending on where you live. It's why Sanders' $15 minimum wage is asinine: $15 per hour would just help you get by in places like NYC and San Francisco, but it's a good deal of money in, say, Mississippi.


17 hours ago, Gritzblitz 2.0 said:

Automation has been taking jobs for the last century or so. In bowling alleys, there used to be an actual human being who reset the pins and returned your ball. Jobs are rendered obsolete by technology all the time.

Yes, of course. I was more commenting on the exponential growth in automation we are likely to see and the effects it would have on a population that is also growing. There simply aren't enough jobs now that pay a living wage. I'd imagine times are about to get real bad for a great percentage of the population.


1 hour ago, ransack said:

Fast food jobs were never meant to be "careers". If that is your "career", maybe you need to reassess your life choices.

Yet someone has to do it, and I've known people who have worked in fast food or grocery stores for a decade or more. The jobs aren't strictly held by teenagers. As an aside, if these entry-level jobs aren't available, there will not be a way to work your way up.

Again, the low skill/low education jobs are dwindling, and we can't compete with slave labour from Asia. So what do we do? 


42 minutes ago, Free Radical said:

The minimum wage in those countries is nonexistent. They do collective bargaining through unions, which in my opinion is better than a flat minimum wage.

Are most of their unions controlled by the fraternal order of Sicilian businessmen like ours are?


24 minutes ago, Free Radical said:

Yet someone has to do it, and I've known people who have worked in fast food or grocery stores for a decade or more. The jobs aren't strictly held by teenagers. As an aside, if these entry-level jobs aren't available, there will not be a way to work your way up.

Again, the low skill/low education jobs are dwindling, and we can't compete with slave labour from Asia. So what do we do? 

Bring manufacturing back onshore (buy American), reduce barriers for small businesses, and rein in the mergers and borderline monopolies that big companies have attained. Those won't fix everything, but it would be a good start.


23 minutes ago, ransack said:

Bring manufacturing back onshore (buy American), reduce barriers for small businesses, and rein in the mergers and borderline monopolies that big companies have attained. Those won't fix everything, but it would be a good start.

Prices will go up because, like I said, we can't compete with slave labour.


15 hours ago, Free Radical said:

Norway, Sweden, and Finland all have government-run educational systems, better pay due to collective bargaining (nearly everyone belongs to a union), better benefits, and all workers are guaranteed 25 paid vacation days on top of 16 work-free holidays a year.

Yet they beat us on educational scores, average wages, and debt relative to GDP, and they have higher economic upward mobility and less corruption. How can they do it, but we can't?

Those three countries also have a combined population roughly equal to New York State's.

5, 9, and 5 million citizens vs. 300 million. Not exactly an apples-to-apples comparison.


15 hours ago, Leon Troutsky said:

It's American Exceptionalism.

We can do ANYTHING***

***Except decent wages for workers

***Except quality health care for all citizens

***Except good educational opportunities for everyone

***Except reducing mass shootings

***Except pretty much anything that would make society a better place.

 

 

15 hours ago, Free Radical said:

But we're ******* excellent at bombing other countries! God ******* ****** are we good at that. 

I'll never understand why some people see leftists as America haters; just makes no sense a'tall.

 

 


3 hours ago, mdrake34 said:

Unless it can think, there will never be any new precedent with this system. 


I get that most here don't share my alarmist opinion regarding AI, but this criterion has been a goal for some time. If this is your main condition, it will be met sooner than you might think.

A long article, but very interesting and pretty **** worrisome as well.

http://www.wsj.com/articles/when-machines-think-and-feel-1458311760

 

Machines That Will Think and Feel

Artificial intelligence is still in its infancy—and that should scare us

What will artificial intelligence accomplish and when?

Artificial intelligence is breathing down our necks: Software built by Google startled the field last week by easily defeating the world’s best player of the Asian board game Go in a five-game match. Go resembles chess in the deep, complex problems it poses but is even harder to play and has resisted AI researchers longer. It requires mastery of strategy and tactics while you conceal your own plans and try to read your opponent’s.

Mastering Go fits well into the ambitious goals of AI research. It shows us how much has been accomplished and forces us to confront, as never before, AI’s future plans. So what will artificial intelligence accomplish and when?

AI prophets envision humanlike intelligence within a few decades: not expertise at a single, specified task only but the flexible, wide-ranging intelligence that Alan Turing foresaw in a 1950 paper proposing the test for machine intelligence that still bears his name. Once we have figured out how to build artificial minds with the average human IQ of 100, before long we will build machines with IQs of 500 and 5,000. The potential good and bad consequences are staggering. Humanity’s future is at stake.

Suppose you had a fleet of AI software apps with IQs of 150 (and eventually 500 or 5,000) to help you manage life. You download them like other apps, and they spread out into your phones and computers—and walls, clothes, office, car, luggage—traveling within the dense computer network of the near future that is laid in by the yard, like thin cloth, everywhere.

AI apps will read your email and write responses, awaiting your nod to send them. They will escort your tax return to the IRS, monitor what is done and report back. They will murmur (from your collar, maybe) that the sidewalk is icier than it looks, a friend is approaching across the street, your gait is slightly odd—have you hurt your back? They will log on for you to 19 different systems using 19 different ridiculous passwords, rescuing you from today’s infuriating security protocols. They will answer your phone and tactfully pass on messages, adding any reminders that might help.

In a million small ways, next-generation AI apps will lessen the friction of modern life. Living without them will seem, in retrospect, like driving with no springs or shocks.

But we don’t have the vaguest idea what an IQ of 5,000 would mean. And in time, we will build such machines—which will be unlikely to see much difference between humans and houseplants. The software Go master is a potent reminder of how far we’ve already come down this road. How would we fare in a world of superhuman robots?

Consider a best case. Human beings like butterflies, encourage their presence, don’t push them around. Still, we can’t tell one butterfly of any given species from another. They are so unlike us that their individual characteristics are beyond our discernment.

Some children still collect butterflies. It’s a hobby to encourage (particularly as an alternative to videogames and social network blather); you get outside and learn about nature. If supersmart robots treat us like butterflies, we will be lucky—but don’t count on it. We can’t support ourselves by browsing flowers, decoratively.

Why, then, are we building machines whose intelligence could swamp and overwhelm ours? Machines whose behavior and attitude to mere humans are impossible to foresee?

Because it is human nature to learn as much as we can and to build the best machines we can. Of course, that doesn’t make it smart in any particular case. Robots with superhuman intelligence are as dangerous, potentially, as nuclear bombs. Yet they are the natural outcome of robots with “ordinary” intelligence—which are (in turn) the goal of AI researchers working hard, right now, all over the world.

Thoughtful people everywhere ought to resolve that it would be unspeakably stupid to allow technologists to fool around with humanlike and superhuman machines—except with the whole world’s intense scrutiny. Technologists are in business to build the most potent machines they can, not to worry about consequences.

So where do we stand? How long before we arrive at these hugely dangerous computers?

For now, we are not even headed in the right direction. AI today is nowhere near understanding the human mind. It’s trying to get to California (so to speak) without ever leaving I-95.

But a breakthrough won’t be hard. We only need to look at things from a slightly different angle—which might happen in a hundred years or this afternoon. Once it does, we had better start following developments as carefully as we would monitor lab technicians playing with plague bacteria.

The issues are surprisingly simple. Most scientists and philosophers identify human thought with rationality, reasoning, logic. Even if they see emotion as important, they rarely see it as central—and even if they do that, they seldom grasp the remarkably simple way that it relates to rational thought and the mind at large.

Filling in this gap in our knowledge is dangerous: It moves us a step closer to superhuman robots. But learning is our fate. It’s impossible to stop. Our best bet is to discover all that we can and to move forward with our eyes wide open.

The human mind is no static, rational machine. Nor is it sometimes rational and sometimes emotional, depending on mood or whim or chance. The mind moves down a continuous spectrum every day, from moments of pure thinking about things to moments of pure being, experience or feeling.

At the start of the day, we tend to be mentally energetic; we tend to start near the spectrum’s top. At the day’s end, we’re at the other end of the spectrum. When we grow sleepy, we are down-spectrum, moving steadily lower. Asleep and dreaming, we are near the bottom.

During the day, we generally go through two main oscillations—drifting downward toward a temporary low point (often in middle or late afternoon), then drifting higher for a few hours, and finally downward again straight into sleep. Everyone has his own pattern, but our general course over a day is from up-spectrum to down.

Software could imitate this spectrum—and will have to, if it is ever to achieve humanlike thought.
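
Taken literally, "imitating the spectrum" could start as something very crude: a single scalar that drifts over the day and decides how much a system reasons step by step versus how much it leans on memory and free association. Below is a minimal, hypothetical Python sketch of that idea; the curve, thresholds, and function names are illustrative assumptions, not anything proposed in the article.

```python
import math

def spectrum_level(hour: float) -> float:
    """Illustrative 'spectrum' value in [0, 1]: high means sharp, up-spectrum
    focus; low means drowsy, down-spectrum drifting. The two bumps loosely
    mimic the article's two daily oscillations (morning peak, afternoon dip,
    evening recovery, slide toward sleep). The shape is invented."""
    morning = math.exp(-((hour - 9.0) ** 2) / 30.0)
    evening = 0.6 * math.exp(-((hour - 19.0) ** 2) / 10.0)
    return max(0.0, min(1.0, morning + evening))

def respond(query: str, hour: float, memories: list[str]) -> str:
    """Blend 'thinking-about' with memory-driven association, weighted by
    the current spectrum level."""
    s = spectrum_level(hour)
    if s > 0.7:
        # Up-spectrum: external focus, step-by-step reasoning.
        return f"[reasoning] analysing '{query}' step by step"
    if s > 0.3:
        # Mid-spectrum: lean on the 'wisdom of experience'.
        recalled = memories[0] if memories else "nothing in particular"
        return f"[experience] '{query}' reminds me of {recalled}"
    # Down-spectrum: free association, drifting toward dream-like imagery.
    return f"[free association] drifting from '{query}' to loosely related images"

for h in (8, 15, 23):
    print(h, "->", respond("the morning news", h, ["last spring's daffodils"]))
```

At 8 a.m. the sketch reasons, in mid-afternoon it reaches for remembered experience, and late at night it free-associates, a toy version of the up-to-down trajectory the article describes.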

The spectrum’s top edge is what we might call thinking-about—pondering the morning news, or the daffodils outside or the future of American colleges. At the opposite end, you reach a state of pure being or feeling—sensation or emotion—that is about nothing. Chill or warmth, seeing violet or smelling cut grass, uneasiness or thirst, happiness or euphoria—each must have a cause, but they are not about anything. The pleasant coolness of your forearm is not about the spring breeze.

Over the day, the mind moves from one kind of mental state to a very different kind, from mental apples to mental oranges.

We can only reproduce this spectrum in software if we can reproduce the endpoints. Thinking-about can be simulated on a computer. But no computer will ever feel anything. Feeling is uncomputable. Feeling and consciousness can only happen (so far as we know) to organic, Earth-type animals—and can’t ever be produced, no matter what, by mere software on a computer.

But that doesn’t necessarily limit artificial intelligence. Software can simulate feeling. A robot can tell you that it’s depressed and act depressed, although it feels nothing. AI could, in principle, build a simulated mind that reproduced all the nuances of human thought and dealt with the world in a thoroughly human way, despite being unconscious. After all, your best friend might be a zombie. (Would it matter? Of course! But only to him.)

Still: No artificial mind will ever be humanlike unless it imitates not just feeling but the whole spectrum. Consider how your mind changes as you slide down-spectrum—gradually mixing together feeling and logical analysis, like a painter mixing ultramarine blue into cool red, producing reds, red-violets, purples, purple-blues. The human mind is this whole range of shades, not just rationality, nor rationality plus a side-order of emotion, nor the definitive list of five or six kinds of mental state some textbooks give. The range is infinite, but the formula for creating this infinity is simple.

At the spectrum’s top, we concentrate readily, focus on external reality, solve problems. Memory is docile. It supplies facts, figures and methods without distracting us with fascinating recollections. Emotion lies low. Early mornings are rarely the time for dramatic arguments or histrionic scenes.

As we move down-spectrum, we lose concentration and reasoning power and come gradually to rely more on remembering than on reasoning—on the “wisdom of experience”—to solve problems. Lower still, the mind starts to wander. We daydream: Our memories, fantasies, ideas are growing vivid and distracting as the mind’s focus moves from outside the self to inside.

Finally, we grow sleepy and find ourselves free-associating—memory now controls the mental agenda—and we pass through the strange state of “sleep onset thought” on the way to sleep and dreaming. (Sleep onset thought is usually a series of vivid, static memories, some or all of them hallucinations—like dreams themselves.)

Emotions grow more powerful as we move downward. Daydreams and fantasies often move us emotionally. Dry, emotionless daydreams are rare. Dreams sometimes have a vaguely unpleasant emotional tone, but they can also make us elated or terrified: The strongest emotions we know happen in dreams.

Up-spectrum, thought uses memory like a faithful, colorless assistant. Down-spectrum, memory increasingly goes off on its own. The flow of information from memory gradually supplants the flow from outside as we sink into ourselves like a flame sinking into a candle.

The spectrum shows that dreaming is no strange intrusion into ordinary thought. It is the normal outcome of descending the spectrum, like rolling down the runway once the airplane has landed. We experience our dreams but don’t think about them much as they unfold; we allow all sorts of improbabilities and absurdities to happen without minding. This means less self-awareness and less memory, which is one reason why down-spectrum thinking has been so hard to grasp, from Joseph’s work in Genesis to Freud’s in Vienna.

We need to understand all of this if we are to understand the mind. But does AI need to reproduce it all to achieve humanlike intelligence?

Every bit of it. One important down-spectrum activity is creativity. Reporters of personal experience tend to suggest that creativity happens when they are not thinking hard about a problem, when mental focus is diffuse—in other words, when they are down-spectrum. Creativity often centers on inventing new analogies, which allow us to see an old problem in the light of a new comparison.

But where does the new analogy come from? The poet Rilke compares the flight of a small bird across the evening sky to a crack in a smooth porcelain cup. How did he come up with that? Possibly by using the fact that these very different things made him feel the same way.

Emotion is a hugely powerful and personal encoding-and-summarizing function. It can comprehend a whole complex scene in one subtle feeling. Using that feeling as an index value, we can search out—among huge collections of candidates—the odd memory with a deep resemblance to the thing we have in mind.
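
That "index value" idea maps fairly directly onto nearest-neighbour search: tag each stored memory with a small feeling vector (say valence, arousal, tension) and retrieve whichever memory's feeling lies closest to the feeling the current scene evokes. Here is a toy Python sketch under that assumption; the labels and numbers are invented for illustration.

```python
from math import sqrt

# Hypothetical memory store: each memory is tagged with a made-up
# "feeling" vector of (valence, arousal, tension).
memories = {
    "a small bird crossing the evening sky": (0.2, 0.3, 0.60),
    "a crack in a smooth porcelain cup":     (0.2, 0.3, 0.70),
    "a crowded morning subway platform":     (-0.4, 0.8, 0.50),
}

def distance(a, b):
    """Euclidean distance between two feeling vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recall_by_feeling(current_feeling):
    """Return the stored memory whose feeling is nearest to the current one."""
    return min(memories, key=lambda m: distance(memories[m], current_feeling))

# A scene that "feels" almost exactly like the porcelain cup also feels
# very close to the bird; the retrieval surfaces the unexpected analogy.
print(recall_by_feeling((0.2, 0.3, 0.68)))  # -> "a crack in a smooth porcelain cup"
```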

Once AI has decided to notice and accept this spectrum—this basic fact about the mind—we will be able to reproduce it in software. The path from the spectrum to the superhuman robot is no short hop; it’s more like a flight to the moon.

But once we had rocket engines, radios, computers and a bunch of other technologies in hand, the moon flight was inevitable (though it also took genius, daring and endless work). The spectrum is just one among the technologies that are necessary to make AI work. But it’s one that has been missing.

Humanlike and superhuman machines follow inevitably from the spectrum and other basic facts about the mind. That makes these fundamental ideas just as dangerous as the vision I started with: the building of super-intelligent computers. They are all natural outcomes of the human need to build and understand. We can’t shut that down and don’t want to.

The more we learn, the more carefully, critically and intelligently we can observe the dangerous doings of AI. Let normal people beware of AI researchers: “All who heard should see them there/ And all should cry, Beware! Beware!/ Their flashing eyes, their floating hair!/ Weave a circle round them thrice/ And close your eyes with holy dread,/ For they on honeydew hath fed,/ And drunk the milk of Paradise.”

Is it strange to conclude a piece on artificial intelligence with Coleridge’s “Kubla Khan”? But AI research is strange.

