Singularity and Politics


#1

Can discussions about the technological singularity go here?

If it pertains to politics?

Like, why have a bunch of humans running the government when you can use an artificial intelligence? The main reason is that it would function with less bias and make more rational decisions. No, it's not going to simply be up to "how it's programmed," and that argument is so stupid because it's going to be "programmed" in the exact same way people are. Both humans and the intelligence would be given an inherent set of instructions (eat, live, reproduce, learn, or learn, adapt, reprogram, respectively), then develop themselves based on the vectors of their environment and, in turn, the decisions that shape them.

The reason I put the emphasis on "less" bias is that even a superior intelligence will still function with bias, but you can bet your bottom dollar that its decisions will be less affected by it than the average human's, especially if it can acknowledge its own biases. Time and time again it has been shown that a machine (in this sense, a program) can outperform a human at any task, and I believe running a government is no exception. Granted, it won't be perfect, but it will be a lot better than what any number of humans could pull off, especially since human error is also a thing. Additionally, said artificial intelligence would always have the internet at its disposal. Imagine the cross-referencing it could do on a senate floor. Right on the spot, it could give statistics from any number of sources on global warming, crime rates, and so on.
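A minimal sketch of what that on-the-spot cross-referencing might look like, purely hypothetical: the source names and figures below are invented placeholders, not real datasets, and a real system would be pulling live data rather than hard-coded numbers.

```python
# Hypothetical sketch of "cross-referencing on the senate floor": aggregate the
# same statistic from several (made-up) sources and report the spread, so the
# audience sees the range of estimates rather than one cherry-picked number.

from statistics import mean

# Placeholder figures standing in for live queries to real datasets.
SOURCES = {
    "agency_a": {"violent_crime_rate_per_100k": 380.0},
    "university_b": {"violent_crime_rate_per_100k": 402.5},
    "ngo_c": {"violent_crime_rate_per_100k": 395.1},
}

def cross_reference(metric: str) -> str:
    """Collect the same metric from every source and summarize the agreement."""
    values = {name: data[metric] for name, data in SOURCES.items() if metric in data}
    low, high = min(values.values()), max(values.values())
    return (f"{metric}: mean {mean(values.values()):.1f} "
            f"(range {low:.1f} to {high:.1f} across {len(values)} sources)")

if __name__ == "__main__":
    print(cross_reference("violent_crime_rate_per_100k"))
```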


#2

How in the world are you going to get one hundred AIs created for the purpose of policy making in order to completely replace the Senate?

Unless those one hundred AIs are from one hundred different backgrounds and have one hundred different experiences, implementing such a system only kills democracy and the power it has.

Then you get into a situation where you'd have to assume the AI would always make the best decision. Sounds like a straightforward path to a dystopia.

Bias is inherent in any law; you can't come to me and tell me that there's such a thing as a completely unbiased law or policy.

And can you show me any evidence that they'd be efficient with legislation? That would mean they'd have to make good legislation too, and "good" is subjective.

There’s no such thing as a universal good.

I can understand if you want to replace fields like math or engineering or something along those lines with AI, but there's no possible way a system like that can be implemented in something as multifaceted and subjective as policy making or the government. The reason America is a democracy is that people make the decisions, not robots.


#3

While on the surface your theory is correct, it makes some assumptions that conceal your own bias toward the subject.

You assume in the statement above that we have the wisdom to instill within an AI all of the emotion and every perspective of being human. You say that it is programmed in the exact same way people are. Well, people are most certainly not programmed the same. We can see from rudimentary attempts at taking people out of the equation just how wrong the likes of Facebook got their algorithms… They were premised on the bias of the coders and, consciously or unconsciously, set to filter news and comments that those coders did not approve of. While the algorithm was rational in that it treated each bit of news with the exact same criteria, it had a built-in bias. A second attempt by Twitter was actually worse… it was again biased in its approach to a comment, took that bias, and within a short time started spewing the same hateful language that expressed that bias.
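To make that point concrete, here is a toy Python sketch (not Facebook's or Twitter's actual code, just an assumed illustration) of how a filter can treat every item with the exact same criteria and still carry a built-in bias, because the criteria themselves were chosen by the coder.

```python
# Toy illustration of a "rational" but biased filter: the rule is applied
# uniformly to every post, yet the blocklist baked into the rule reflects
# one coder's judgment of what should not be seen.

# The coder's own judgment of what counts as "not approved" lives here.
CODER_BLOCKLIST = {"topic_x", "topic_y"}

def is_visible(post: str) -> bool:
    """Apply one uniform criterion to every post: hide it if it mentions a blocked topic."""
    words = set(post.lower().split())
    return words.isdisjoint(CODER_BLOCKLIST)

posts = [
    "new study on topic_x released today",
    "local sports team wins championship",
]

for post in posts:
    print(f"{'SHOW' if is_visible(post) else 'HIDE'}: {post}")
```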

When you add to that the need for the AI to evolve, you run into the problem of it creating self-protections. These may not be as ominous as the vengeance of HAL, but they could take the form of the AI becoming a 'teacher' to its human adversaries.

Then of course, just as with all other technological creations of man, someone will employ it on the dark side. Someone will develop 'viruses' to corrupt your AI so that THEY can have the advantage… World governments are already working on competing droids that seek out and destroy those of the adversary…

As for your Super Senator… it is currently being given the information that others want it to see. Have you ever seen posted on the internet the raw baseline temperature data that the models use to tell us we have 'global warming'? No, you get data that has been massaged, 'corrected' for this imagined variation or that, and data that is deliberately altered for political purposes. AI might be able to establish a baseline, but it must depend on information fed to it by the same people who are attempting to skew your opinion and, depending on your response, shut you up.


#4

Would the ending be like Terminator?


#5

No… I am afraid that the Terminator is merely an interim necessity for Cyberdyne…


#6

Cyberdyne running the government.
An interesting concept, as they were certainly for a reduction in world population, resulting in less pollution and the end of the climate change debate.


#7

As Putin points out in this article, AI will first be used to determine the ideological winner… then a homogeneity will be determined by the programmer who wins…

http://www.houstonchronicle.com/business/technology/article/Putin-Leader-in-artificial-intelligence-will-12166704.php