Hi Jamesy,
This reminds me in some respects of the question about cloning humans: technologically we can, but the question is whether we should.
While we are still some way off from making true artificial intelligence, I think it is likely we will manage it one day. There are a lot of problems with doing it, but none of them appear to break the laws of nature, which is always the real showstopper.
But then the question is: should we? What are the ethical considerations of creating such a consciousness?
Here is a link to the Wikipedia article, which gives a good overview of the current state of AI:
http://en.wikipedia.org/wiki/Artificial_intelligence#Approaches
Maybe. First, it is worth pointing out that “intelligence” is quite a poorly defined concept. There are different types of intelligence, and it is very hard to pin down what it actually means to be “intelligent”; there is an extensive literature on the misleading nature of IQ tests that seek to quantify it. But at bottom, the brain is nothing more than a couple of kilograms of material that supports fast, re-configurable electrical connections and memory of various kinds, controlling the body via more electrical signalling. So it is quite easy for me to imagine the construction of an artificial brain with synthetic connections. It would then have to be programmed to some extent and set into a condition from which it could learn and become self-aware. At that point you have an ethical dilemma: could you shut down such a device? Would that be murder? What rights should such a device have? All sci-fi stuff, but I can’t see why this should not be possible from a technical point of view at some stage.
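The “learn” part is of course the hard bit, but the basic idea of a network of adjustable connections that improves with experience can be sketched in a few lines. This toy example (a classic perceptron, a deliberately simplified illustration with no claim to resemble a real brain) learns the logical AND function by nudging its connection weights whenever it makes a mistake:

```python
# A toy perceptron: adjustable "connections" (weights) that get
# nudged towards correct behaviour each time an error is made.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias (firing threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # strengthen or weaken connections in proportion to the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Nobody is suggesting a real brain works like this, but it shows the principle: nothing in the learning rule is magic, just repeated small adjustments to electrical-style connections.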
Hey jamesy. Who would change their batteries? 😉
Hi jamesy,
It is certainly possible that one day there will be wide-scale production of robots with AI (machines that can think for themselves), but there are several ethical issues that would need to be considered before doing this. You would have to think about the rights they should be given, and about things like whether turning them off would count as murder, or at least a crime. We would have to be very careful if we got to the point of co-existing with AI – they could conceivably overthrow humans, and I’m sure we definitely wouldn’t want to end up living in a world like the one in The Terminator movies! 🙂
The others have covered this pretty well, but I would like to add one thing. A project was launched a little while ago to try to build a machine that could make a copy of itself, and with 3D printers we are now nearly at that point: you put in a 3D model and the printer builds the object using a variety of different techniques. What’s scary is that there is now software that can take your design, identify what is wrong with it, and make improvements. This means you could, in principle, design a 3D printer that makes copies of itself that improve on your design, with each generation better than the last – and it’s all automatic!
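The copy-and-improve loop described above can be made concrete with a tiny hill-climbing sketch. This is a hypothetical stand-in for real design-optimisation software, not any particular product: each generation, the current design is copied, one small random change is made, and the change is kept only if a scoring function says the copy is better.

```python
import random

def improve_design(design, score, generations=200, seed=0):
    """Copy-mutate-select loop: each generation, a copy of the current
    design gets one small random tweak, kept only if it scores better."""
    rng = random.Random(seed)
    best = list(design)
    for _ in range(generations):
        copy = list(best)                  # the machine copies its design
        i = rng.randrange(len(copy))
        copy[i] += rng.uniform(-0.5, 0.5)  # one small design change
        if score(copy) > score(best):      # keep only improvements
            best = copy
    return best

# Hypothetical "design": two parameters whose ideal values are (3, -1).
# The score penalises the squared distance from that ideal.
target = [3.0, -1.0]
quality = lambda d: -sum((a - b) ** 2 for a, b in zip(d, target))

final = improve_design([0.0, 0.0], quality)
print(final)  # drifts towards [3.0, -1.0] over the generations
```

Real design software is vastly more sophisticated, but the unsettling point from the answer above survives even in this toy: nothing in the loop needs a human once the scoring rule is set.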