ABOUT 5G MARINE

New directions for the
world's marine stakeholders.

Artificial Intelligence and Unmanned Surface Vessels


The Short Course

By: Phillip S. Olin for 5G International, Inc
Contents
Introduction
Defining Intelligence and AI
Brain Models and the Early History of Computer Simulation of Thinking
The Turing Test
Inside the Concept of Machine Intelligence
Deep Learning
DARPA and Deep Learning, 2018
AI and Language
Language and Robowars
US Law and Unmanned Armed Drones
5G/Robosys USVs and AI in Practice
Conclusion
The Future of AI
Copyright Notice and Reprint Permission
ADDENDUM: The Layered Human Visual System
REFERENCES and ENDNOTES

Introduction

The 5G International Inc. group has been designing and building surveillance robots, or unmanned surface vessels (USVs), for over three decades, starting with our first USV, the Owl vehicle, patented in 1986 (image left). Our customers include governments and major corporations, and our work has been featured in major publications worldwide. For the past decade, we, along with Robosys Automation and Robotics, have been building operational Artificial Intelligence (AI) into unmanned marine systems. In this article, we will define AI along with related terms and concepts, recount what we have accomplished with regard to AI, and explain our ongoing work. AI is an exploding, often-referenced field in 2018, and even Vladimir Putin has joined the public mix:

[Image: the Owl USV]

“Artificial intelligence is the future, not only for Russia, but for all humankind ... whoever becomes the leader in this sphere will become the ruler of the world.” Artificial Intelligence (AI) and its sister technologies will be the engine behind the fourth industrial revolution, which the World Economic Forum described as “unlike anything humankind has experienced before.” (1) ~ Vladimir Putin

To have a useful discussion about 5G/Robosys’ AI activities, we will begin by explaining what AI is, starting the context with computational brain models. It would be helpful if we could provide precise definitions of the terms used in this article, like ‘algorithm’, ‘machine learning’ and ‘deep learning’, but while many use these terms daily, there are substantial differences among the definitions in circulation. What we provide here is a general framework in ordinary language that will facilitate discussion using agreed-upon terminology.

As we proceed, we will also delve into some language theory because of the increasing tendency of many to use voice recognition and interaction as criteria for machine intelligence. We do not use voice recognition in 5G USVs and a bit of language theory will explain why.

Looking into language theory, as a subset of pattern recognition in general, will help the reader understand some of the major issues that are faced by AI software developers. It is important to understand why the use of voice or image recognition needs to be introduced into critical unmanned vehicle applications very carefully.

Defining Intelligence and AI

To begin, if you Google “AI”, constrained with quotation marks as Boolean modifiers, you will get over 2 billion results. A good place to start for readers who are not experienced in the field is the Wikipedia entry for AI. A telling portion of the entry says:

“The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring ‘intelligence’ are often removed from the definition, a phenomenon known as the AI effect, leading to the quip ‘AI is whatever hasn't been done yet.’ For instance, optical character recognition is frequently excluded from ‘artificial intelligence’, having become a routine technology.” (2)

As the AI field improves and expands, it is a rapidly moving target, with “fuzzy” edges that pundits, analysts, and the press confront daily as AI evolves. AI practitioners in the business, on the other hand, just keep doing their work, writing and applying code to practical problems, generally leaving the description of what they do to others. There is a consensus in the industry that many or most AI programmers write deep learning code, in the form of advanced neural networks, which can succeed at a task without the programmers understanding, a priori, how or why their code achieves the result.

We will begin describing AI with a look at our human intelligence before moving on to machines. We can safely say that the human variety of “intelligence” is a function of the human brain and sensory system. Our eyes, ears, tongue, etc. collect data which is transmitted to the brain, where it is organized, sorted and committed to memory, then thoughtfully used to control physical actions like evading a predator or shooting a bow. New data is received from successive, interactive trials in the environment. These successive physical actions are compared with a mental model of a desired result, and future actions are adjusted to acquire a skill: e.g. shooting an arrow directly at a flying duck = miss; shooting an arrow in front of the duck = food for the fire. This intelligent hunting skill is a learned, iterative process.

Brain Models and the early history of Computer Simulation of Thinking

To understand how humans work, a logical first step was to replicate human mental processes. Enter the computerized brain model, or artificial neural network. Brains, in part, consist of more than 100 billion neurons (image below left, from Wikimedia (3)).

The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5 x 10^14 synapses (100 to 500 trillion).

[Image: a neuron, from Wikimedia (3)]

Computerized models of brains per se are “computational models” which developed alongside useful digital computers. An early building block was the McCulloch-Pitts neuron model, conceived in 1943. (4)

McCulloch and Pitts, along with others, theorized that since the brain was full of interconnected neurons and known to be the seat of learning, a trace, or engram (#), must be recorded inside the neuronal web; i.e. to commit something to memory, certain pathways had to be “burnt in.” The hypothesis was that perceived patterns resulted in an increased probability that some individual neurons would develop a lower (or relatively higher) threshold for firing than others. (#) An artificial neuron is a mathematical function that can be built into physical devices, like a relay triggered by pulses or threshold voltages. Early theorists thought that all brain processes might be localized, linear traces, but more recent theories hold that human memory and consciousness are more holographic, and researchers have physiological evidence to support their claim. (5)
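
A McCulloch-Pitts unit is simple enough to express in a few lines of code. Here is a minimal sketch (ours, for illustration): binary inputs, fixed weights, and an output that "fires" only when the weighted input sum reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the unit computes logical AND;
# lowering the threshold to 1 turns the very same unit into logical OR.
print(mp_neuron([1, 1], [1, 1], threshold=2))  # AND(1, 1) -> 1
print(mp_neuron([1, 0], [1, 1], threshold=1))  # OR(1, 0)  -> 1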

A subsequent advance beyond McCulloch and Pitts was Frank Rosenblatt’s Perceptron, built at Cornell in 1956-7. It was the first neuronal model in hardware to demonstrate the ability to learn, producing a standard output from variable input in the form of handwritten characters. (6) Rosenblatt’s Perceptron had a structured network in the digital domain but no specific program, i.e. no formal algorithm. It was an experiment.
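
Rosenblatt’s machine was hardware, but the learning rule it embodied can be sketched in software: after each mistake, nudge every weight toward the correct answer. Below is a toy version (ours; it learns logical OR rather than handwritten characters):

import random

def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron rule: adjust weights in proportion to the output error."""
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = target - output        # -1, 0, or +1
            w[0] += lr * error * x1        # strengthen or weaken each weight
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
w, b = train_perceptron(data)
for (x1, x2), target in data:
    prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
    print((x1, x2), "->", prediction, "expected", target)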

These were groundbreaking feasibility studies that spawned the machine intelligence field and helped found the multi-billion-dollar industry it is today. Computerized neural networks have been in use for over 60 years and are an important element in current work. To date, however, research progress in computational modeling of whole organic brains has us somewhere between fruit flies, ants, and roundworms, with working models of a rat brain and simulation of higher-order information processing somewhere off in the future.

How Do Neural Networks Differ from Conventional Computing?

“To better understand artificial neural computing it is important to know first how a conventional 'serial' computer and its software process information. A serial computer has a central processor that can address an array of memory locations where data and instructions are stored. Computations are made by the processor reading an instruction as well as any data the instruction requires from memory addresses, the instruction is then executed, and the results are saved in a specified memory location as required. In a serial system (and a standard parallel one as well) the computational steps are deterministic, sequential, and logical, and the state of a given variable can be tracked from one operation to another.

In comparison, ANNs are not sequential or necessarily deterministic. There are no complex central processors, rather there are many simple ones which generally do nothing more than take the weighted sum of their inputs from other processors. ANNs do not execute programed instructions; they respond in parallel (either simulated or actual) to the pattern of inputs presented to it. There are also no separate memory addresses for storing data. Instead, information is contained in the overall activation 'state' of the network. 'Knowledge' is thus represented by the network itself, which is quite literally more than the sum of its individual components”. (7)

Digital simulation of the human brain has many future bridges to cross. Mapping only neurons and synapses onto hardware analogs, as physiological research evolves, is inadequate. In addition to neurons, 50% of the brain is built from glial cells, which constantly wrap and unwrap around neurons and whose functions are currently little understood by neurophysiologists. (Video #) The fact is that organic brains are very complex pieces of meat, bathed in a constantly changing chemical soup of transmitter substances, nutrients and hormones. Cognitive processes like perception and memory seem to involve waves of chemical interactions that travel laterally across synapses, in addition to the linear signals that travel along axons and dendrites to subsequent neurons. Observing these processes in detail is complicated by constantly changing sensory input, which interacts with neural activity to generate behavior. We are “wetware” (8), and we predict that before computational brain models operate in any way that satisfactorily simulates or replicates conscious humans, theorists will run smack into knotty problems including (but not limited to) quantum chemistry (9), quantum hydrodynamics (10), and the functions of RNA. (11)

It will be fun to watch the process, but computational whole-brain models aren’t of particular current use to us in the autonomous vehicle business, where the primary objectives are safe, efficient routing and on-the-fly data acquisition, analysis and transmission. The AI problems we tackle are pattern recognition, predicting weather, generating and updating maps, and optimizing data collection. Especially where swarms are involved, emulating these human-like skills is hard enough.

Efforts to model the human brain’s function have been ongoing for decades. Over two hundred years ago, Luigi Galvani demonstrated “animal electricity” by making a dead frog’s leg twitch with a spark. (12) Over a century ago, it was understood that the human nervous system is basically an integrated system of neurons (nerve cells) and synapses (the spaces between neurons), connected chemically. (13) The way neurons actually work chemically can be seen in: Animation: Neuron action potential. (14)

Aside: A common misconception about brain function is that it is electrical activity. It is not. The mistaken notion originated from the fact that brains can be intracranially stimulated with electrodes, producing thoughts, images, smells, etc. Neurosurgeon Wilder Penfield was the pioneer. (15)

Frequently, researchers will tout the idea that someone wearing a hat full of electrodes and a virtual reality visor will be able to control an unmanned vehicle and see what it sees, i.e. telepresence. Some, even MIT, claim that these electrode hats will be able to display a subject’s thoughts on a computer screen. Here’s a tip for aspiring USV researchers: detecting from, and transmitting to, trillions of microscopic chemical processes from outside the scalp, through skin, bone, protective layers, fluids and supporting tissues, is impossible. Here is a recent, extravagant claim in the press based on an MIT release: “Silent headset lets users quietly commune with computers”.

"The motivation for this was to build an IA device – an intelligence-augmentation device," According to MIT, this makes communicating with a computer silent and completely private. One example of the benefits of such a system is is being able to use a computer as an aid to beating a chess opponent by silently communicating moves to the device and receiving advice surreptitiously. A more ethical use would be on the deck of an aircraft carrier, where it's normally too noisy to either speak or hear.” (16)

[Image: MIT’s silent-headset prototype]

The “weasel” terms to note in the article include “the hope is” and “some day”. Here’s how New Scientist (4/5/18) enticed readers with the MIT press release: “Google with your mind. A mind-reading device can answer questions in your head.” The point of recounting this brief history of brain models, along with “click bait” misapplications, is so the reader will have a concept of what, for practical, current robotic purposes, AI can’t do.

The Turing Test

5G/Robosys claim artificial intelligence for developed, working USVs, and we’ll invoke the Turing Test to support the claim. Alan Turing (1912-1954) was an English mathematician with a PhD from Princeton, credited with substantial contributions to computer science in both theory and practice. His work included cracking the German Enigma encryption machine in WWII using computers of his own design, and he is credited with saving over 10 million lives. He devised the “Turing Test” (17) to answer the question: can computers think? Actually, rather than answer the question, he wisely evaded it. The test basically moots the need for describing complex biological information processing, or for generating logical proofs that a computer is somehow thinking. (Another aside: anyone interested in proving computers can think like conscious humans needs to solve the old solipsism problem. (18)) The Turing Test is simple, but remains broadly used today:

“Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.” (17)

The ability of a computer to collect massive data sets (e.g. images, Facebook profiles, oceanographic or Hubble data), then process, sort and present results at superhuman speeds, is often referred to as intelligent, but it is probably better described as a machine learning process. If our USVs can autonomously retrace a corrected course when telemetry is lost, return home when a fuel tank is half empty, or modify a pre-programmed course to avoid threats or obstacles along the way, it might appear that there is a human on board. These are the types of functions that, especially when combined, we call intelligence, as passed by the Turing Test, and what we build into USVs.
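
To make the idea concrete, here is a hypothetical sketch (not 5G/Robosys’s actual control code) of the kind of failsafe decision loop just described, in which each pass over the vessel’s state selects a navigation mode:

from dataclasses import dataclass

@dataclass
class VesselState:
    telemetry_ok: bool
    fuel_fraction: float   # 1.0 = full tank
    obstacle_ahead: bool

def select_mode(state: VesselState) -> str:
    """Pick a navigation mode from the current vessel state."""
    if not state.telemetry_ok:
        return "RETRACE_LAST_GOOD_COURSE"  # autonomously back-track the corrected course
    if state.fuel_fraction <= 0.5:
        return "RETURN_HOME"               # keep enough fuel for the trip back
    if state.obstacle_ahead:
        return "REPLAN_AROUND_OBSTACLE"
    return "CONTINUE_MISSION"

print(select_mode(VesselState(telemetry_ok=False, fuel_fraction=0.8, obstacle_ahead=False)))
# -> RETRACE_LAST_GOOD_COURSE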

Machine “Intelligence”

Let’s have a look at what’s inside the concept of machine intelligence, starting with algorithms.

"A computation is a process whereby we proceed from initially given objects, called inputs, according to a fixed set of rules, called a program, procedure or algorithm, through a series of steps and arrive at the end of these steps with a final result, called the output. The algorithm, as a set of rules proceeding from inputs to output, must be precise and definite with each successive step clearly determined.” Soare, 1995 in Wikipedia (19)

A mature application that makes machines seem smart is image processing, used daily by people around the world in their ubiquitous cell phones and in programs like Photoshop. A major processing tool is the venerable Fourier transform, used in enhancing and compressing images (enhanced example shown left, pre-processed shown right). Here’s an exposition of the concepts and the math: INTRODUCTION TO FOURIER TRANSFORMS FOR IMAGE PROCESSING. (20) While machines are commonly described as smart if they can perform calculations at superhuman speeds, the application of Fourier transforms to images for sharpening, color manipulation, etc. is just fancy, brute-force arithmetic.
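
The basic move is simple: transform the image into frequency space, keep or discard frequency bands, and transform back. Here is a minimal sketch of a Fourier-domain low-pass filter (ours, not from the cited tutorial, and assuming the NumPy library); real tools layer far more on top of this:

import numpy as np

def low_pass_filter(image, keep_fraction=0.1):
    """Keep only the lowest spatial frequencies, smoothing the image."""
    spectrum = np.fft.fft2(image)
    h, w = spectrum.shape
    kh, kw = int(h * keep_fraction), int(w * keep_fraction)
    mask = np.zeros_like(spectrum)
    # In an unshifted FFT, the low frequencies live in the four corners.
    mask[:kh, :kw] = mask[:kh, -kw:] = mask[-kh:, :kw] = mask[-kh:, -kw:] = 1
    return np.real(np.fft.ifft2(spectrum * mask))

image = np.random.rand(128, 128)   # stand-in for a grayscale photograph
smoothed = low_pass_filter(image)
print(smoothed.shape)              # (128, 128)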


Machine Learning

Machine learning is the term currently used for the next level beyond algorithmically parsing and manipulating data, where the ability to compute, tabulate results, and then modify and improve the algorithm is one of the criteria for machine learning in autonomous systems. An example of machine learning for unmanned surface vessels is finding the least elapsed time to get from Point A to Point B through varying conditions of wind, waves, tide, current and obstacles. The problem may be simple or complex, and how well it is solved is a measure of the system’s “IQ”.
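
As a generic sketch of that routing problem (with invented costs, not our production planner), Dijkstra’s algorithm over a grid of traversal times captures the idea; each cell’s cost could encode local wind, current or hazard penalties:

import heapq

def least_time_route(times, start, goal):
    """times[r][c] = seconds to enter cell (r, c); returns minimal total time."""
    rows, cols = len(times), len(times[0])
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        t, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return t
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + times[nr][nc]
                if nt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nt
                    heapq.heappush(queue, (nt, (nr, nc)))
    return None

times = [[1, 1, 9],
         [9, 1, 9],
         [9, 1, 1]]   # 9 = strong adverse current, 1 = calm water
print(least_time_route(times, (0, 0), (2, 2)))  # -> 4, hugging the calm column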

Myth: Machine learning is AI. Machine learning and artificial intelligence are frequently used as synonyms, but while machine learning is the technique that’s most successfully made its way out of research labs into the real world, AI is a broad field covering areas such as computer vision, robotics and natural language processing, as well as approaches such as constraint satisfaction that don’t involve machine learning. Think of it as anything that makes machines seem smart. None of these are the kind of general “artificial intelligence” that some people fear could compete with or even attack humanity. Beware the buzzwords and be precise. Machine learning is about learning patterns and predicting outcomes from large data sets; the results might look “intelligent” but at heart it’s about applying statistics at unprecedented speed and scale. (21)

Deep Learning (DL)

Deep learning, by computer scientists’ consensus, is the application of algorithms in layers of processing, where successive layers use the output from previous layers as input. Layers can include multiple algorithms, which may or may not be invoked. DL can be supervised (e.g., classification of purpose-built data sets for training) and/or unsupervised (e.g., analysis of unstructured patterns from the real world). A deep learning machine’s layered artificial neural networks may include formulae, as in structured algorithms, or “latent” variables that are “self-modified” (in some sense) by the data that flows through them, and by virtue of their logical and mathematical architecture. This process harkens back to the original McCulloch-Pitts neuron model and Rosenblatt’s Perceptron. The layered architecture is typically looped, or recursive, in a winnowing and/or refining process.

Deep learning software development is very experimental. Code is written and run without the code writer knowing what the end product will be. This is “black box AI”, and as it is put into practice, it has become highly problematic. Big-dollar players like Microsoft and Google have had major problems with AI bias as they put it out into the wild for real-world testing. As examples:

“… much has been made about the launch and (temporary) shutdown of Microsoft’s chatbot Tay. For those of you who might not know, Tay is a machine learning project that was launched with the goal of conducting research and development in the field of conversational understanding. It’s a bot that can chat with users online, and it has presence over several platforms, including Twitter, GroupMe and Kik. Tay is programmed to mimic the behavior of a young woman, tell jokes and offer comments on pictures, but she’s also designed to repeat after users and learn from them in order to respond in personalized ways… Unfortunately, Tay was shut down shortly after her launch because she was found to make racist and offensive comments. Apparently, the quirks in the bot’s behavior were capitalized by a subset of users to promote Nazism and attack other Twitter users.” (22)

Another example is Google’s picture sorting, well-publicized as having misidentified black human faces as gorillas. (23) Similarly, results for CEO face searches returned a preponderance of white males. There is a massive amount of information online regarding research to correct bias. As elegant and clever as DL code might be, its ability to classify images it scrapes from the web has proven highly problematic. There are serious implications for the unmanned vehicle industry. Consider these types of questions:

1. Fact: Most CEOs are white males. Should reverse discrimination be built into DL algorithms to “correct” factual search results? Why, how, and by whose authority?

2. How does a DL-equipped, autonomous hunter-killer robot determine who or what needs to be destroyed?

3. Ethics, in some sense, must be incorporated into autonomous vehicles. If a robot is confronted simultaneously with the unavoidable choice of hitting either a young girl or an old man, who does it hit?

4. When an autonomous car runs a stop sign, who gets the ticket?

For this article, the two major points we would like to make are: 1. how complicated the biological systems are that DL theorists and code writers are attempting to replicate, and 2. that state-of-the-art DL developers are constrained to black box experiments that have unintended consequences. Bias in DL results can’t be blamed on AI programmers for creating a neutral network that is organized by real-world data. An interesting question is how DL systems can output politically correct results to satisfy moral activists. Is it possible to develop an AI system that generates an ethical layer from real-world data to filter results? As a practical matter, probably not. And if it is even possible to create a table of ethics to filter results, who gets to write it? A Google committee?

In the future, deep learning machines will be used (and needed) to analyze themselves and other deep learning machines, and to explain the results so that we, the people who build them, provide their housing and pay for their electricity, can understand what they are doing. Given that humans are currently in a major worldwide cyber war, the stakes couldn’t be higher.

The best deep learning machines, applied to financial markets, medicine and genetics, resource management and military strategy, will eventually dominate the planet. The process will be Darwinian, and there will be many losers along the way. The winner may be Google, IBM’s Watson, China, Russia, or Facebook with their vast resources, or … a small team in Nigeria. Billions are being spent on financial AI research. If an expert, superior AI stock, bond and currency trader is developed, when does it stop taking profits? When it has all the money?

DARPA and Deep Learning, 2018

DARPA put out a “Broad Agency Announcement” (BAA) in 2016 for Explainable Artificial Intelligence (XAI), which seeks a quintessential, high-level example of deep learning applied to the general evaluation of other AI systems. (24) Note: In 2018, evaluation of stand-alone AI systems, or those incorporated into hardware, is a major part of what DARPA employees do, but we doubt they will worry that useful XAI will make them redundant and compromise their job security ;>)

“1. Funding Opportunity Description (25)

DARPA is soliciting innovative research proposals in the areas of machine learning and human computer interaction. The goal of Explainable Artificial Intelligence (XAI) is to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence (AI) systems. Proposed research should investigate innovative approaches that enable revolutionary advances in science, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice.”

“A. Introduction

Dramatic success in machine learning has led to an explosion of new AI capabilities. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous benefits, but their effectiveness will be limited by the machine’s inability to explain its decisions and actions to human users. This issue is especially important for the Department of Defense (DoD), which is facing challenges that demand the development of more intelligent, autonomous, and symbiotic systems. Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners.”

According to MIT Technology Review (“The U.S. Military Wants Its Autonomous Machines to Explain Themselves” (24)), 13 awards were granted, at up to $8M each. It’s nice to see DARPA working on the holy grail of deep learning: applying deep learning to understanding deep learning itself, and then “explaining” the results! Referring back to the Vladimir Putin quote in our Introduction, the stakes couldn’t be higher in the current worldwide cyber war, and we would suggest that this DARPA effort is far under-funded: the Chinese are reportedly investing $4.5 billion (USD) in AI R&D!

“Deep learning is especially cryptic because of its incredible complexity. It is roughly inspired by the process by which neurons in a brain learn in response to input. Many layers of simulated neurons and synapses are fed labeled data and their behavior is tuned until they learn to recognize, say, a cat in a photograph. But the model learned by the system is encoded in the weights of many millions of neurons and is therefore very challenging to examine.” (DARPA ibid)
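
The “many layers of simulated neurons” in that passage can be seen in miniature below (a toy sketch of ours with random stand-in weights; in a trained network, those weights encode everything the system has learned, which is exactly why they are so hard to examine):

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: weight the inputs, then apply a ReLU nonlinearity."""
    w = rng.normal(size=(x.shape[0], n_out))  # stand-in weights
    return np.maximum(0, x @ w)

x = rng.normal(size=8)    # raw input features
h1 = layer(x, 16)         # first hidden layer
h2 = layer(h1, 16)        # second layer consumes the first layer's output
out = layer(h2, 2)        # final layer, e.g. "obstacle / no obstacle"
print(out)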

Part of the pattern recognition system referenced by DARPA is the layered human visual system. So that we can move along here, we provide detailed diagrams in an Addendum at the end of this article rather than in the text.

AI and Language

One of the top rungs on the artificial intelligence ladder is using natural human language to speak and/or write. Applications might be describing a set of facts in the physical world or presenting abstract, AI-developed hypothetical constructs. Trivial examples are reporting oil pressure, reporting ETA for a UAV, or Alexa telling me she has turned on a power strip in my shop. A higher level might be to “discuss” various protocols for scientific data acquisition and the likely outcomes of each. A logical criterion for the success of DARPA’s XAI would be a machine’s ability to discuss black box DL experiments with humans in ordinary language.

But state-of-the-art human-machine voice interfaces are problematic. Voice typing has been researched since Bell Labs began work on it in the 1950s. Despite the efforts of Google, Microsoft, Apple and many universities, voice typists still experience issues with “four”, “fore”, “4”, and “for”. All the significant software companies corral users into the “cloud” so they can maximize the effectiveness of big data collection, yet despite billions in available cash for research, with a massive potential ROI, they remain unable to provide trouble-free voice recognition.

Deep learning machines, with the potential to out-think humans, must be able to communicate with us to be useful. The use of ordinary language (whether in speech or writing) is one of the most difficult challenges faced by deep learning developers. The problem of how language is understood, used, and maps onto the real world has involved extensive work by philosophers like Gottlob Frege, Bertrand Russell, and Ludwig Wittgenstein:

The Tractatus Logico-Philosophicus is the only book-length philosophical work published by the Austrian philosopher Ludwig Wittgenstein in his lifetime. The project had a broad aim – to identify the relationship between language and reality and to define the limits of science – and is recognized as a significant philosophical work of the twentieth century. The central argument of Wittgenstein's Philosophical Investigations centers on a devastating rule-following paradox that undermines the possibility of following rules in our use of language. Kripke writes that this paradox is "the most radical and original skeptical problem that philosophy has seen to date." (26) (Emphasis ours)

Deep learning theorists attempting to incorporate ordinary language capabilities must consider these well-explored philosophical arguments. These are not idle, ivory-tower speculations. They are well-considered conclusions from very hard-nosed philosopher-logicians, many of whom laid the foundations for modern mathematics, logic and computer science itself. From their work, it is probably true that it is not possible to create a set of rules for language that can be put into useful algorithms. Therefore, the practical computer use of language must somehow evolve via deep learning applied to itself. Good luck to DARPA! Some of these issues are treated by John Launchbury, the Director of DARPA’s Information Innovation Office (I2O), in a recent DARPA YouTube video: A DARPA Perspective on Artificial Intelligence. (27)

Language and Robowars

Understanding deep learning and language is of critical importance if we consider a “robowar”, say in the Spratly Islands, where there are many jurisdictions, alliances and languages in play. Scenario: unarmed USVs on surveillance missions in historically international, but currently disputed, waters come under attack. Weaponized drones and USVs are sent to defend them, and hostilities escalate from there to involve manned missions. (Image: Maritime Executive) It is trivial for vessels under attack to “talk” to a rescue fleet of armed USVs and send encrypted data packets containing ranges, speeds and positions of the enemy combatants. However, things become much more complicated when we add humans to the hostilities mix and start considering military strategies.


US law and unmanned armed drones:

Current US policy states: "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."… However, the policy requires that autonomous weapon systems that kill people or use kinetic force, selecting and engaging targets without further human intervention, be certified as compliant with "appropriate levels" and other standards, not that such weapon systems cannot meet these standards and are therefore forbidden... "Semi-autonomous" hunter-killers that autonomously identify and attack targets do not even require certification.[14] Deputy Defense Secretary Robert Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but might need to reconsider this since "authoritarian regimes" may do so. (28) (Emphasis ours)

By “authoritarian”, Deputy Defense Secretary Work is certainly referring to countries like China, Iran and North Korea, which are not constrained by complex, time-consuming chains of command like the US’s. US robowar policy is caught in what chess players call a “fork”, where an opponent’s move threatens two valuable pieces and the player is guaranteed to lose one of them. Here is the fork the US faces: either incorporate autonomous lethality into unmanned systems, or keep its legal, ethical policy regarding “killer robots”, stretching response time to many minutes or days and accepting a major tactical disadvantage and the probable loss of its fleet.


Humans have long been unable to beat computer chess programs. Go is a game of capturing men and territory, and there are some useful features and analogies which apply to marine warfare strategies. Recently, the world’s best Go players (incidentally Chinese) have fallen victim to computers. Per the Wikipedia entry, the lower bound on the number of legal board positions in Go has been estimated at 2 x 10^170, a very large number. War is infinitely more complex than Go, and DARPA, along with its awardees, has some major hurdles to overcome to make XAI trustworthy.

Natural human language is involved for the time being. There is a chain of command above semi-autonomous military robots, to which they provide situational awareness. This chain of command is cross-linked between the Army, Air Force, Navy, Coast Guard and Marines. Behind the military services are politicians and career diplomats, and behind them are citizen-voters. Somehow, in a democracy like the United States, computerized data from the Spratlys will have to make it through this multi-layered data maze in a continuous decision-making loop, and quickly. It is unlikely that autonomous land, sea, air or space vehicles will be trusted to participate in critical decision making anytime soon.

5G USVs in practice:

5G International teams with The Robosys Company in development, testing and manufacturing. Principals in both companies have collaborated on USVs since 2001. Below we present a summary of some of the capabilities in our AI systems.

Sensor examples: radar (MARPA, ARPA and dynamic), sonar (forward- and down-looking), lidar, PTZ cameras including infrared, roll/pitch/yaw, speed, GPS, AIS, oil pressure, water/air/engine temperature, RPM, fuel, engine and bilge alerts, digital compass, and steering and throttle positions, using standard NMEA/J1939 and proprietary protocols.

The Sensor Information Processing Unit (SIPU) integrates processed sensor data and transmits it to the remote operator, simultaneously making it available on board for autonomous navigation when desired, or in the event of data-link loss, with complete autonomy avoiding all dynamic and static obstacles. The SIPU can handle a wide variety of data from customer-requested sensor options (example: following an oil slick).

Proportional-integral-derivative (PID) control is used to modify throttle commands based on proximity to obstacles and approach vectors: PID control increases or decreases the USV’s speed as a function of distance from obstacles and/or as a function of calculated, vectorized traffic, maximizing fuel efficiency and minimizing ETA. In remotely-operated mode, the PID system reduces tele-operator error by serving as a navigational aid.
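
For readers unfamiliar with PID control, here is a minimal sketch of the idea (gains, update rate and variable names are invented for illustration, not our production values): throttle is adjusted from the error between a desired standoff distance and the measured distance to the nearest obstacle, accumulated over successive sensor updates.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Combine proportional, integral and derivative terms into one command."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.1)    # assume 10 Hz sensor updates
desired_standoff = 50.0                        # metres from the nearest obstacle
for measured in (120.0, 90.0, 70.0, 55.0):     # obstacle closing in
    error = measured - desired_standoff        # positive = room to speed up
    print(f"distance {measured:5.1f} m -> throttle change {pid.update(error):+.2f}")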

The Centralized Grid Mapping System (CGMS) is used for pre-planning routes on a scalable, vectorized display of vessel traffic and pre-mapped or encountered hazards to navigation, and it implements the reactive, short-range planning and re-planning component. The CGMS archives all data for post-mission analysis. It applies algorithms to reduce sensor noise and uncertainty; maintains a probabilistic map history and adapts to uncertainties arising from sensor inaccuracies and the fusion of sensor data (a generic illustration appears at the end of this section); and generates a safety zone around obstacles, which can be defined by the end user depending on the size of craft and the situation. Out of the box, it is biased toward using safe-channel areas in dredged channels, but is not necessarily constrained to do so.

5G’s USVs are somewhat like hardware and software versions of us humans, with sensors, propulsion, computers and telemetry systems. Our goal is to exceed human performance on the oceans for USV tasks that are useful in science, commerce and international conflict. By many criteria, we have succeeded. 5G’s USVs can move more quickly (65 knots), see farther across a wider spectrum than the human eye, map the bottom with sonar, taste water with ion-specific electrodes, measure radioactivity, and analyze air and water with laser backscatter and spectroscopic analysis. We can log these data tens of thousands of times per second and use them on board for navigation, identifying threats or predicting weather, and we can sort and transmit data around the world in near real time. Output can take a variety of formats: raw data for off-line, remote analysis or, for human consumption and to save bandwidth, data compressed and assembled on board into graphs, bar charts and 3D, multi-spectral volumetric maps.
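
Here is that generic illustration: a log-odds occupancy grid, a standard way to maintain a probabilistic map under sensor noise (a textbook technique, not necessarily the CGMS implementation), in which repeated noisy detections push each cell’s occupancy probability up or down.

import numpy as np

HIT, MISS = 0.9, 0.6           # assumed sensor hit/miss likelihoods
log_odds = np.zeros((50, 50))  # 0 = a 50/50 prior for every grid cell

def update_cell(r, c, detected):
    """Nudge a cell's log-odds up on a detection, down on a clear reading."""
    p = HIT if detected else 1 - MISS
    log_odds[r, c] += np.log(p / (1 - p))

def probability(r, c):
    """Convert log-odds back to an occupancy probability."""
    return 1 / (1 + np.exp(-log_odds[r, c]))

for _ in range(3):             # three consecutive radar hits on one cell
    update_cell(10, 10, detected=True)
print(round(probability(10, 10), 3))  # climbs toward 1.0 with each hit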

Conclusion

Deep learning is unquestionably the future of AI, and the AI genie can’t be put back into the bottle, despite frantic calls from many pundits in the press. People who want AI to be banned or regulated should study the history of the internet, back to Arpanet. Arpanet was a top-secret DoD program developed during the Cold War. It was recognized that US defense systems could not rely on individual mainframe computers in the event of a nuclear attack, so a resilient, secure network of networks was created, in which the overall system could still function if one or several nodes were destroyed or disconnected. Arpanet was so convenient to use that it spread to defense contractors and universities. Generals and admirals rely on defense contractors, who depend on university professors, who rely on bright students. Beyond code, these “bright” students soon began exchanging porn by email, and according to internet historians, the first ever online sale was “a bag of weed”. (29)

"Arpanet “escaped” from the DoD once the concepts and code were in the wild. It morphed into the internet that we know today. To quote John Gilmore; “The Net interprets censorship as damage and routes around it.” This is part of the Arpanet legacy of resilience via redundant, distributed computing systems. One of the interesting things about this quote is that it implicitly reifies the internet as a separate thing unto itself with a survival function. The Arpanet program was terminated in 1990". TIME Magazine, December 6, 1993

The fundamentals for neural networks are relatively simple. Below, we give an example of a tutorial, one of many that are freely and widely available.

Neural network how-to! An excellent YouTube video with over 1.4 million views provides a graphic explanation of how layered neural networks are constructed and how they work. It begins simply, which should be suitable for all viewers, and then proceeds through the mathematics involved. Non-technical viewers might still find it worth their while to watch the whole video, and for aspiring AI code writers there is sufficient information to start building their own systems. Links to freely-available, downloadable code from MIT are included in the YouTube video notes. (30)

Self-replicating neural networks are underway with a full head of steam: searching Google for “self-replicating neural networks” yields roughly 3 million results, many from Google Scholar. We find the often-expressed opinion that AI should somehow be banned or regulated to be simply ludicrous. You can imagine what would happen if Congress or the EU ever attempted to play AI whack-a-mole; they can’t even use arithmetic to balance a budget, let alone understand compound interest. The world is interconnected with copper, optical fiber and microwaves, and artificial intelligence software is on the loose with built-in survival tools. The best we can do is fasten our seat belts and go along for the AI ride.

One of the interesting developments to watch will be China’s multibillion-dollar AI effort, with its top-down, authoritarian approach. We began this article with a quote from Vladimir Putin, who is highly motivated and in a position similar to President Xi Jinping’s, i.e. able to control AI development. It seems to us that free enterprise and carte-blanche development are intrinsically better suited to advance AI to its full potential, but it may be that near-term military or financial applications developed by authoritarian regimes could swallow or starve the rest of the world’s AI efforts first.

AI is driving a huge amount of business activity, and 5G/Robosys offer a wide range of proven designs for unmanned marine testing. The air/sea interface is a very complex, physically demanding environment: wind, current, waves, surface and sub-surface topography, and corrosive saltwater make the ocean a much more difficult environment than outer space, with its benign vacuum of negative one atmosphere. It is far more complicated than some unmanned test track in Arizona or sunny southern California, but we have solved the major problems. Foam- or inflatable-collared USVs are an ideal, economical platform for testing terrestrial transportation AI R&D. Scale, payload, electrical supply, range, endurance and durability are not problems for us, and interfacing with others’ test software is something we like to do. Any “reasonable” budget is no problem: we can deliver robust, well-featured systems starting as low as $100,000, and we are experienced with multi-million-dollar budgets for fully-serviced manufacturing and marketing programs with multiple models. We are ready!

Copyright Notice and Reprint Permission:

This document is copyright 2018 by 5G International Incorporated. This article is downloadable in PDF format and may be freely circulated, but only in its original, unmodified form. You may download the document here. Note: we track our work with plagiarism-checking tools. 5G welcomes your constructive questions and comments. Email us.

Note From the author:

I was a junior in college in the 1960s, a psychology major with a focus on physiological psychology. I read a Scientific American cover article describing analog, fluid computers and had an epiphany based on what I had learned about brain structures: I thought, my God, these things will be able to think! Digital computers seemed too much of a stretch. When I discussed the AI concept with my professors, I was mostly ridiculed; however, I have remained convinced. Looking toward graduate school, there were no curricula available for computer simulation of thinking, or artificial intelligence, in 1970. Sympathetic mentors directed me to philosophy of science. There still wasn’t a great fit, but I did a master’s degree, and the thesis topic I was pushed into was Turing’s Imitation Game. Frank Rosenblatt’s perceptron was my real interest, including the logic of neural circuits. It seemed to me, naively, that if I could solve that problem, I could figure out how to do a mind-meld with a computer and live forever in some sort of weird way. After my master’s, I spent two years at the University of London, delving into brains and computer logic, but I became convinced that no human brain had enough horsepower to solve the problem. I quit academia after 9 years and became a general contractor, which eventually turned into building USVs.

ADDENDUM: The Layered Human Visual System

These images show a labeled section through the human brain and eyes and a section through the human retina. The images represent a small portion of the systems humans use for pattern recognition and do not show how visual data is integrated with other sensory system layers like hearing, smell, touch, and higher order intellectual functions.

[Image: labeled section through the human brain and eyes]

[Image: section through the human retina]

REFERENCES

1. Putin Talks Power of Artificial Intelligence - MSN.com

2. Artificial intelligence - Wikipedia

3. https://commons.wikimedia.org/w/index.php?curid=7616130

4. http://aishack.in/tutorials/artificial-neurons-mccullochpitts-model/

5. https://en.wikipedia.org/wiki/Holonomic_brain_theory

6. https://en.wikipedia.org/wiki/Frank_Rosenblatt#Perceptron

7. http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html

8. https://en.wikipedia.org/wiki/Wetware_computer

9. https://en.wikipedia.org/wiki/Quantum_chemistry

10. https://en.wikipedia.org/wiki/Quantum_hydrodynamics

11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3820025/

12. https://en.wikipedia.org/wiki/Luigi_Galvani

13. https://en.wikipedia.org/wiki/Charles_Scott_Sherrington

14. https://www.youtube.com/watch?v=HYLyhXRp298

15. https://en.wikipedia.org/wiki/Wilder_Penfield; also see https://en.wikipedia.org/wiki/Electrical_brain_stimulation

16. https://newatlas.com/silent-headset-mit-alterego/54077/

17. https://en.wikipedia.org/wiki/Turing_test

18. https://en.wikipedia.org/wiki/Solipsism

19. https://en.wikipedia.org/wiki/Algorithm_characterizations#1995_Soare's_characterization

20. https://www.cs.unm.edu/~brayer/vision/fourier.html

21. https://www.itworld.com/category/artificial-intelligence/

22. https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/

23. https://www.theverge.com/2015/7/1/8880363/google-apologizes-photos-app-tags-two-black-people-gorillas

24. https://www.technologyreview.com/s/603795/the-us-military-wants-its-autonomous-machines-to-explain-themselves/

25. https://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf

26. https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus; also see https://en.wikipedia.org/wiki/Ludwig_Wittgenstein#1953:_Publication_of_the_Philosophical_Investigations

27. https://www.youtube.com/watch?v=-O01G3tSYpU

28. https://en.wikipedia.org/wiki/Lethal_autonomous_weapon#Ethical_and_legal_issues

29. https://gizmodo.com/remember-how-the-first-thing-ever-sold-online-was-a-bag-1708799689

30. https://www.youtube.com/watch?v=aircAruvnKk



THE HYDRA USV

Port, Harbor and Yacht Security


4 meter, >180 hp, payload 200 kg, speed 60 mph/100 km/h, 360-degree gyro-stabilized day/night HD video camera, HD sonar, FMCW radar, with a full software and sensor suite.

THE OSCAR CLASS USV

Diesel Hybrid Electric Propulsion


9 meter, 1,000 hp diesel/hybrid electric propulsion, payload 500 kg, speed 45+ knots, 360-degree gyro-stabilized day/night HD video cameras, HD sonar, FMCW radar, 2 km hailing device, with a full software and sensor suite.

THE BRAVO CLASS USV

Special Operations USV


11 meter RHIB, 1,000 hp, payload 900 kg, speed 45+ knots, two 360-degree gyro-stabilized day/night HD video cameras, HD sonar, FMCW radar, with a full software and sensor suite. Capable of carrying 6 crew members fully autonomously or under manual control.

COMMAND & CONTROL

Command and Control Systems


Fully customized command and control stations for ground-based, mobile, or shipboard operations. Redundant built-in video and telemetry stations for pilot and mission officers. Self-contained power supplies and generators, HVAC, and self-rising antenna masts. Typical setup time for operations: 30-60 minutes.