Wednesday, September 25, 2019

Some brief questions about Artificial Intelligence (AI)


In our world today we have systems called Governments. We say that there are Democratic Governments and there are Dictatorships.
We all know that Democracy in its purest form is a Utopia; a benevolent Dictatorship is also a Utopia. All governments boast 'controls' supposedly designed for the efficient running of each respective system. Invariably there are flaws involving efficiency, civil rights, and ethical management generally. The old adage comes to mind: "Power corrupts, and absolute power corrupts absolutely."

So what can we do? Well, there is a concept called "Artificial Intelligence", a.k.a. "AI". This has arisen from Cyber-Science, which now forms an integral, indispensable part of life on this planet for most of its inhabitants.

This branch of science depends very much on 'algorithms'. In everyday terms, these are mathematical rules or formulae which lead to desired outcomes (or should do). AI depends on algorithms embedded in such a way as to give the specific system a degree of learning, and even creativity.
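The distinction between an ordinary algorithm and a learning one can be sketched in a toy example. The following is purely illustrative (the conversion rule and the `learn_slope` routine are my own invented names, not anything from a specific AI system): the first function is a fixed rule written in by hand, while the second adjusts its own rule from examples.

```python
# A fixed algorithm: a hand-written rule mapping input to a desired outcome.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

# A learning algorithm: the rule itself is tuned from experience.
# A single adjustable weight is nudged toward the true conversion
# slope using worked examples, instead of being written in by hand.
def learn_slope(examples, steps=2000, lr=0.00001):
    w = 0.0
    for _ in range(steps):
        for f, c in examples:
            prediction = w * (f - 32)
            w += lr * (c - prediction) * (f - 32)  # shrink the error
    return w

examples = [(32, 0), (212, 100), (50, 10)]
w = learn_slope(examples)
# w settles near 5/9, the slope the fixed rule had built in
```

The point of the sketch is the one made above: what the unit can learn, and how well, is entirely determined by the update rule its designers embed.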

In short, an AI unit is a Computer Plus: one capable of learning and of performing creative actions based on that learning. It would be like a 'brain' in a box, but made of inanimate materials. Of course, to act physically, such a box would need appendages and tools to manipulate its environment in any effective way.

A line of progressive technological complexity could be seen as the following: abacus, calculator, computer, AI unit. This could stop here, but if we give the unit tools, sufficient processing power, and the ability to replicate its parts (given that it is provided with the materials), we have something quite awesome!

The learning ability and the end capabilities depend entirely on the algorithms built into the design. So it would seem that we still have control over the progression. The key question in all of this is: who feeds in the algorithms and the other design parameters?

It is my opinion that the dangers of AI are inherent in who is producing the product and setting the limitations on its degree of autonomy, learning capability, and creativity. Will such units be run by Governments for advice? Will they be part of weaponry in conflicts? Will they be used as adjuncts to human brains or other body parts? How complex will they be allowed to become?

The human race is messing up its own planet right now. Could AI be our salvation? Will we allow ourselves to trust in the potential learning and resulting advice from these AI units to save us from our own self-destructive actions?

Who actually will ever decide a workable course of action here?

Now there is another aspect to all this: Consciousness. Protoplasmic life on this planet includes many life forms with varying degrees of consciousness. Humans, other primates, and cetaceans are right at the top of the tree in this regard. Our sense of Ethics leads us to respect such beings, and even those further down the tree of life.

We are now considering the ethics which might be applied to AI, now and in the future. Are such units conscious? I think so. Personally, I would go a step further and say that consciousness is something shared by all things which can accumulate memory and act upon it as a reference.

We are our memory anyway. You could scroll down and read an article I wrote on this several years ago ("We are a Dream of the Past", Nov. 2011; see side menu). AI, like computers generally, is based upon a binary switching system of zeros and ones. Protoplasmic neurotransmission relies upon sodium/potassium shifts combined with dynamic protein- and steroid-mediated synaptic connections. So: similar, but with different chemistry.
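The analogy can be made concrete with a toy model. This is a sketch only, and a deliberately simplified one: both a silicon logic gate and a biological neuron can be caricatured as a threshold switch that 'fires' (outputs 1) when its weighted inputs exceed a threshold, and stays silent (outputs 0) otherwise. The function name and parameters here are my own choices for illustration.

```python
# A threshold switch: the common caricature of both a logic gate
# and a firing neuron. Output 1 if weighted input reaches the
# threshold, 0 otherwise.
def fires(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and threshold, the same switch behaves
# as an AND gate: it fires only when both inputs are active.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", fires([a, b], [1, 1], 2))
```

Whether the 'total' is a voltage on silicon or a sodium/potassium shift across a membrane, the switching behaviour is the shared idea.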

If we choose to use AI in a positive, controlled manner it could be very useful and might lead to our ultimate salvation. There are pros and cons, but here is the rub: there is great diversity in the agendas of world Governments. How can we ever be sure that some will not grossly misuse this technology?

Yes, we should walk very carefully, and slowly, in front of this new 'vehicle' waving our proverbial red flag. Control of the potential for rogue autonomy is essential here.

3 comments:

  1. One other thing to consider is if (and that's a BIG if) an AI actually achieves self-awareness, by what measure do we determine that? Turing test? Or something more, with a higher level of vetting? I ask because IF a construct achieves that, I believe we are morally obligated to give that entity 'personhood'. Humans are woefully lacking in following through with that, sadly. Just look at the examples you mention above of non-human primates, cetaceans, and, I'd argue, even fellow humans who look different from their colonizers.

    1. There is a subtle difference between self-awareness and mere consciousness, which does not negate the need to respect the latter from an ethical point of view. Self-awareness is being able to recognize 'Being' in a reflection. Primates and tested cetaceans seem to be able to do this, but the rest, apparently, cannot. This does not mean that they are not conscious and therefore deserving of respect (even cats... joke!)

  2. Not too long from now we will be able to make an AI unit as powerful as ourselves, or more so, in all aspects, so ethics will be a must to consider.
