Edge Computing in Remote and Hostile Environments: Biomimetic Strategies for Energy-Efficient Artificial Intelligence

By Jeffrey L. Krichmar

The Artificial Intelligence (AI) revolution foretold in the 1960s is well underway in the second decade of the twenty-first century. Its period of phenomenal growth likely lies ahead. AI-operated machines and technologies will extend the reach of Homo sapiens far beyond the biological constraints imposed by evolution: outward into deep space, and inward into the nano-world of DNA sequences and the medical applications that follow from them. And yet, we believe, there are crucial lessons that biology can offer to enable a prosperous future for AI. For machines in general, and for AIs especially, operating over extended periods or in extreme environments will require energy usage orders of magnitude more efficient than exists today. In many operational environments, energy sources will be constrained. An AI's design and function may depend on the type of energy source, as well as on its availability and accessibility. Any plan for AI devices operating in a challenging environment must therefore begin with the questions of how they are powered, where fuel is located, how energy is stored and made available to the machine, and how long the machine can operate on a given unit of energy. While one key advantage of AI is its ability to reduce the dimensionality of a complex problem, some energy is still required for functionality. Hence, the materials and technologies that provide the needed energy represent a critical challenge for future AI use scenarios and should be integrated into AI design.

Edge Computing in Remote and Hostile Environments

Trends in many human-built systems point toward sensing, processing, and actuation situated on distributed platforms. The emerging Internet of Things (IoT) comprises cyber technologies (1), hardware and software, that interact with physical components in environments that may or may not be populated by humans. IoT devices are often thought of as the "edge" of a large, sophisticated cloud processing infrastructure. Processing data at the edge reduces system latency by removing the delays introduced by the aggregation tiers of the information technology infrastructure (2). In addition to minimizing latency, edge processing increases system security and mitigates the privacy concerns that arise when data are processed in the cloud. In cases where the data path between the edge and the user is very long, edge computation can, through feature extraction, reduce the dimensionality of the data and hence the cost of sending information. However, edge processors may be far from power sources and may need to operate without intervention over long periods.
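To make this concrete, here is a minimal, hypothetical sketch (the sensor, window size, and feature choices are illustrative and not from the original article) of an edge node summarizing a raw sensor window into a handful of features, so that only the low-dimensional summary needs to traverse the long data path:

    import numpy as np

    def extract_features(window: np.ndarray) -> np.ndarray:
        """Summarize a raw sensor window as a small feature vector.

        Transmitting these four statistics instead of every raw sample
        reduces the payload from len(window) values to 4, which is the
        dimensionality reduction described above.
        """
        return np.array([
            window.mean(),                    # average level
            window.std(),                     # variability
            window.max(),                     # peak value
            np.abs(np.diff(window)).mean(),   # mean rate of change
        ])

    # Hypothetical usage: a 1,000-sample vibration window becomes 4 numbers.
    raw_window = np.random.default_rng(0).normal(size=1000)
    payload = extract_features(raw_window)
    print(f"raw samples: {raw_window.size}, transmitted values: {payload.size}")

In practice the features would be chosen, or learned, for the downstream task, but the communication saving scales in the same way.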

Exploration of remote and hostile environments, such as space and the deep ocean, will most likely require AI and ML solutions. These environments, however, are inherently hostile to the circuitry that subserves current AI and ML technologies.

Unless human beings can be "radiation-hardened," robotic space probes will continue to dominate the exploration and exploitation of space, in domains ranging from low Earth orbit to interstellar missions. All of these domains present hazards to CMOS-based AI, including collisions with high-energy photons (such as gamma rays), micrometeorites, planetary weather, and anthropogenic attacks. Planetary missions such as NASA's Curiosity Mars rover have revealed additional risks from weather (such as sandstorms) that have put missions in jeopardy. A radioisotope thermoelectric generator (RTG) was added to the Curiosity design to overcome the vulnerabilities of the solar energy systems used on previous missions. While Curiosity's computational systems do not constitute true AI, the power demands of the entire rover (including drills and actuators) are of a similar order of magnitude to what an onboard AI system would likely require. Recent concerns over the limited availability of radioisotopes, combined with safety concerns during the launch phase, will constrain future deep space missions that might use nuclear power.

As in space, power constraints are a current challenge in deep ocean environments. Current non-nuclear autonomous underwater vehicles (AUVs) have limited capabilities due to restrictions on energy storage and the availability of fuel sources. Furthermore, the extremely high pressures of deep-sea environments pose their own challenges, not only to the energy supply for AI but also to the mass and construction of the protective housings for the electronics. Ocean glider AUVs use buoyancy engines with fins to convert vertical force into horizontal motion. While very slow, such AUVs are far less energy-constrained than other current technologies. However, the power generated by such engines is not currently suitable for powering AI systems; batteries are used for those functions and must be recharged at the ocean surface using photovoltaic cells. There are proposals to use nuclear fission power generation to enable deep-sea battery recharging stations for military AUVs, though these remain at the development stage and carry safety considerations similar to those mentioned above for space.

In many of these domains, onboard AI will be the preferred computational modality because of the latency of long-distance communication with Earth-based controllers. Operating in such domains carries the additional challenge of energy constraints, because solar power may not be readily available on Earth's Moon, on planets with weather, or in deep space (including interstellar space). The primary alternative energy source for such domains is nuclear, both fission- and fusion-based, in contrast to the radioisotope thermoelectric technologies currently used for missions such as the Curiosity Mars rover. While break-even fusion power has yet to be demonstrated on Earth, the abundance of fusion fuels in the solar system makes such power sources attractive. In all these cases, the nuclear technology must have resiliency to hazards similar to that of the AI itself, and it will be optimal to consider such requirements holistically at the design stage.

Current Solutions to AI’s Energy Requirements

With the growth of the Internet, data traffic (traffic to and from data centers) is escalating exponentially, crossing a zettabyte (1.1 ZB) in 2017 (3). Figure 1 shows this trend. Data centers currently consume an estimated 200 terawatt-hours (TWh) each year, equivalent to about 1% of global electricity demand. A 2017 International Energy Agency (IEA) report noted that, with the ongoing explosion of Internet traffic in data centers, electricity demand will likely increase by 3% (4). While it is difficult to estimate the exact role of AI within data centers, analysis of these reports suggests that it is non-trivial, on the order of 40% (see Figure 1). For example, Google projected in 2013 that people searching by voice for three minutes a day using speech-recognition deep neural networks would double its data centers' computation demands, and this was one impetus for developing the Google TPU. Additionally, Facebook has stated that machine learning is "applied pervasively across nearly all services" and that "computational requirements are also intense".

Highly efficient data factories, known as hyperscale facilities, use an organized, uniform computing architecture that scales up to hundreds of thousands of servers. While these hyperscale centers can be optimized for high computing efficiency, their growth faces limits from a variety of constraints that also affect other consumers on the electrical grid. Nevertheless, the shift to hyperscale facilities is a current trend, and if 80% of servers in conventional US data centers were moved to hyperscale facilities, energy consumption would drop by 25%, according to a 2016 Lawrence Berkeley National Laboratory report.

One way hyperscale centers have cut their power usage is through more efficient cooling. By locating in cooler climates, data centers can draw in cool outside air to good effect. Another approach employs warm-water cooling loops, which are suited to temperate and warm climates. An innovative way to address the energy constraints of AI systems is to employ an AI-powered, cloud-based control recommendation system. For example, Google uses a cloud-based AI that collects information about the data center cooling system from thousands of physical sensors and feeds this information into deep neural networks. The networks then predict how different combinations of possible actions will affect future energy consumption.
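As a rough, hypothetical illustration of this recommend-by-prediction loop (it is not Google's system; a linear model stands in for the deep networks, and the sensor names, setpoints, and numbers are invented), the sketch below fits a predictor of cooling power from sensor readings plus a candidate setpoint, then ranks the candidates by predicted consumption:

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented historical log: sensor readings (outside temperature, IT load),
    # the chilled-water setpoint that was chosen, and the resulting cooling power.
    n = 500
    outside_c = rng.uniform(5, 35, n)
    it_load_kw = rng.uniform(200, 800, n)
    setpoint_c = rng.uniform(16, 26, n)
    cooling_kw = (0.8 * it_load_kw - 6.0 * (setpoint_c - 16.0)
                  + 2.0 * outside_c + rng.normal(0, 10, n))

    # Fit a simple linear predictor of cooling power from (sensors, action).
    X = np.column_stack([outside_c, it_load_kw, setpoint_c, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, cooling_kw, rcond=None)

    # Recommend the allowed setpoint with the lowest predicted cooling power
    # for the current conditions.
    now_outside, now_load = 28.0, 650.0
    candidates = np.arange(16.0, 26.5, 0.5)
    predictions = np.column_stack([
        np.full_like(candidates, now_outside),
        np.full_like(candidates, now_load),
        candidates,
        np.ones_like(candidates),
    ]) @ coef
    best = candidates[np.argmin(predictions)]
    print(f"recommended setpoint: {best:.1f} C "
          f"(predicted cooling power {predictions.min():.0f} kW)")

A production controller would predict over many coupled actuators and check each recommendation against plant safety constraints before applying it.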

Although hyperscale centers and smart cooling strategies can lower energy consumption, these solutions do not address applications where AI is operating at the edge or when AI is deployed in extreme conditions far away from convenient power supplies. We believe that this is where future AI systems are headed.

Our view is that there is a pressing need to address the energy issue as it applies to the future of AI and ML. While there is a growing research effort toward developing efficient machine learning methods for embedded and neuromorphic processors, we recognize that these methods do not address the full needs of future applications, despite offering compelling first steps. Generally, current methods modify existing techniques rather than develop de novo algorithms.

In this paper, we emphasize how biology has addressed the power consumption problem, with a particular focus on energy efficiency in the brain. Furthermore, we look at non-neural aspects of biology that also lead to power savings. We suggest that these strategies from biology can be realized in future AI systems.

Existence Proof: Human Brains as Efficient Energy Consumers

The original goal of AI was to extract principles from human intelligence. On the one hand, these principles would allow for a better understanding of intelligence and of what makes us human. On the other hand, those principles could be used to build intelligent artifacts, such as robots, computers, and machines. In both cases, the model is human intelligence, which derives from the function of the brain. We believe that there are also important energy efficiency principles that can be extracted from neurobiology and applied to AI. The nervous system can therefore provide much inspiration for the construction of low-power intelligent systems.

The human nervous system operates under tight metabolic constraints. These include the essential role of glucose as fuel under non-starvation conditions, a continuous demand for approximately 20% of the human body's total energy budget, and the lack of any effective energy reserve, among others. And yet, as is well known, the brain operates on a mere 20 W of power, approximately the power required for a ceiling fan running at low speed. While severe metabolic constraint is at one level a disadvantage, evolution has optimized brains in ways that lead to remarkably efficient representations of important environmental features, representations quite distinct from those employed in current digital computers.

The human brain employs many strategies to reduce its functional metabolic energy consumption. Indeed, at every level of the nervous system one can observe strategies that maintain high performance and information transfer while minimizing energy expenditure. These range from ion channel distributions, to coding methods, to wiring diagrams (connectomes). Many of these strategies could inspire new methodologies for constructing power-efficient artificial intelligent systems.

It has been suggested that the brain strives to minimize its free energy by reducing surprise and predicting outcomes. Thus, the brain's efficient power consumption may have a basis in thermodynamics and information theory. That is, the system may adapt to resist a natural tendency toward disorder in an ever-changing environment. Top-down signals from downstream areas (e.g., frontal or parietal cortex) can realize predictive coding. In this way organisms minimize the long-term average of surprise, which corresponds to entropy, by predicting future outcomes. In essence, they minimize the expenditure required to deal with unanticipated events. The idea of minimizing free energy has close ties to many existing brain theories, such as the Bayesian brain, predictive coding, cell assemblies, and Infomax, as well as to an evolutionarily inspired theory called Neural Darwinism, or neuronal group selection. For field robotics, a predictive controller could allow a robot to reduce unplanned actions (e.g., obstacle avoidance) and produce more efficient behaviors (e.g., optimal foraging or route planning). For IoT and edge processing, predictions could reduce the data that must be communicated: rather than sending redundant, predictable information, a device would only need to "wake up" and report when something unexpected occurs.
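The sketch below is a minimal, hypothetical illustration of that wake-on-surprise idea (the signal, smoothing factor, and threshold are invented): the device keeps a cheap local predictor, stays silent while its predictions hold, and transmits only when the prediction error, its surprise, exceeds a threshold.

    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated slowly drifting temperature signal with one unexpected step change.
    t = np.arange(2000)
    signal = 20.0 + 0.002 * t + rng.normal(0, 0.05, t.size)
    signal[1200:] += 3.0   # the event the device should actually report

    alpha = 0.05       # smoothing factor of the on-device predictor
    threshold = 0.5    # prediction error ("surprise") that triggers a report
    prediction = signal[0]
    reports = 0

    for x in signal:
        surprise = abs(x - prediction)
        if surprise > threshold:
            reports += 1                # wake the radio and send the unexpected reading
            prediction = x              # resynchronize the predictor after reporting
        else:
            # Silent, cheap local update: predictable samples are never transmitted.
            prediction = (1 - alpha) * prediction + alpha * x

    print(f"samples observed: {signal.size}, reports transmitted: {reports}")

On this synthetic trace the device observes 2,000 samples but transmits only around the step change, which is the communication (and hence energy) saving the free-energy view suggests.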

In summary, the brain represents an important existence proof that extraordinarily efficient natural intelligence can compute in very complex ways within harsh, dynamic environments. Beyond an existence proof, brains provide an opportunity for reverse-engineering in the context of machine learning methods and neuromorphic computing.

Energy Efficiency Through Brain-Inspired Computing

A key component in pursuing brain- and neural-inspired computing, coding, and neuromorphic algorithms lies in the currently shifting landscape of computing architectures. Moore's law, which has dictated the development of ever-smaller transistors since the 1960s, has become more and more difficult to follow, leading many to claim its demise. This has inspired renewed interest in heterogeneous and non-von Neumann computing platforms, which take inspiration from the efficiency of the brain's architecture. Neuromorphic architectures can offer orders-of-magnitude improvements in performance per watt compared to traditional CPUs and GPUs, although the benefit depends on the application chosen. For modeling biological neural systems, the performance improvements may already be considerable (e.g., the Neurogrid platform claims a five-orders-of-magnitude efficiency improvement over a personal computer). Additionally, neural approaches enable IBM's TrueNorth chip to run convolutional neural networks for embedded gesture recognition at less than one watt (5). In another comparison, a collection of image (32 × 32 pixel) benchmark tasks on TrueNorth achieved approximately 6,000 to 10,000 frames per second per watt, whereas the Nvidia Jetson TX1 (an embedded GPU platform) processes between 5 and 200 (ImageNet) frames per second at approximately 10–14 watts of net power consumption (6). We note, however, that it is difficult to achieve fair network and dataset parity across platforms, and that neuromorphic systems supporting even millions of neurons may be too small for application-level machine learning problems.
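To put those quoted figures on a common footing (a back-of-the-envelope comparison using only the numbers in this paragraph, and subject to the parity caveats just noted), the short calculation below converts the Jetson TX1 numbers into frames per second per watt:

    # Figures quoted above from (6); illustrative only, not a controlled benchmark.
    truenorth_fps_per_w = (6_000, 10_000)

    jetson_fps = (5, 200)       # ImageNet-class frames per second
    jetson_watts = (14, 10)     # pair the slowest rate with the highest power draw
    jetson_fps_per_w = (jetson_fps[0] / jetson_watts[0],   # ~0.4
                        jetson_fps[1] / jetson_watts[1])   # 20.0

    ratio_low = truenorth_fps_per_w[0] / jetson_fps_per_w[1]    # conservative: ~300x
    ratio_high = truenorth_fps_per_w[1] / jetson_fps_per_w[0]   # optimistic: ~28,000x
    print(f"Jetson TX1: {jetson_fps_per_w[0]:.1f} to {jetson_fps_per_w[1]:.0f} frames/s/W")
    print(f"TrueNorth advantage: roughly {ratio_low:.0f}x to {ratio_high:.0f}x")

On these numbers the embedded GPU delivers roughly 0.4 to 20 frames per second per watt, so the quoted TrueNorth results correspond to an advantage of roughly two to four orders of magnitude for this class of task.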

Neuromorphic architectures encompass a wide variety of computing hardware platforms (7), from sensors to analog and digital processors. In most cases, however, the defining characteristics take inspiration from the brain (a minimal illustrative sketch follows the list below):

  1. Massively parallel, simple integrating processing units (neurons).
  2. Sparse, dynamic, low-precision communication via "spikes."
  3. Event-driven, asynchronous operation.
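
As a minimal, hypothetical sketch of these three characteristics (the parameters are invented, and this is not the programming model of any particular neuromorphic chip), the code below simulates a single leaky integrate-and-fire unit that idles between events, integrates sparse input spikes, and fires only when its membrane potential crosses threshold:

    import numpy as np

    def lif_neuron(input_spike_times, weight=0.6, tau=20.0,
                   threshold=1.0, t_max=100.0, dt=1.0):
        """Leaky integrate-and-fire unit: integrates sparse input events and
        emits an output spike when its membrane potential crosses threshold."""
        v = 0.0
        out_spikes = []
        inputs = set(input_spike_times)
        for step in range(int(t_max / dt)):
            t = step * dt
            v *= np.exp(-dt / tau)        # passive leak between events
            if t in inputs:               # event-driven input: add weighted charge
                v += weight
            if v >= threshold:            # threshold crossing -> output spike
                out_spikes.append(t)
                v = 0.0                   # reset after firing
        return out_spikes

    # A sparse input train: the unit only does significant work when events arrive.
    print(lif_neuron(input_spike_times=[10.0, 12.0, 50.0, 52.0, 54.0]))

Because each input event adds only 0.6 to the membrane potential, isolated events decay away silently; only the closely spaced inputs (around times 10-12 and 50-52) drive the unit over threshold, so the output spike train, here [12.0, 52.0], is even sparser than the input.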

This event-driven, distributed, processor-in-memory approach provides robust, low-power processing compatible with many neural-inspired machine learning and artificial intelligence applications. Hence, platforms in size-, weight-, and power-constrained (SWaP-constrained) environments, such as edge and IoT devices, can gain increased effective remote computation and provide real-time, low-latency intelligent and adaptive behavior. Moreover, because learned artificial intelligence systems are often inherently noisy, they may compute more robustly in extreme environments such as space.

Conclusions

AI is on a trajectory to change society as fundamentally as the industrial revolution did. Even without the development of artificial general intelligence, the trend is toward human-machine partnerships that will collectively extend the reach of humans in multiple domains (e.g., space, cyber, the deep sea, the nano-scale). However, there is no free lunch: AI will require energy inputs that we believe must be accounted for at all stages of the AI design process. Such designs should leverage the solutions that biology, and especially the human brain, has evolved to achieve energy efficiency without sacrificing functionality. These solutions are critical components of what we call intelligence.

AI has the potential to change society drastically; evolving human-machine partnerships will substantially extend the reach of humans across multiple domains. For this to happen, AI's energy requirements must be an early component of future integrated AI design processes. A coherent design strategy should leverage the solutions that biology, especially biological brains, offers for maintaining energy efficiency while preserving functionality, and it will require both private and government support for research and innovation.

References

  1. Atzori L, Iera A, Morabito G. The internet of things: A survey. Computer networks. 2010;54(15):2787-805.
  2. Hu YC, Patel M, Sabella D, Sprecher N, Young V. Mobile edge computing—A key technology towards 5G. ETSI white paper. 2015;11(11):1-16.
  3. Andrae A, Edler T. On global electricity usage of communication technology: trends to 2030. Challenges. 2015;6(1):117-57.
  4. Cozzi L, Franza V. Digitalization: A New Era in Energy: Extract from Digitalization & Energy Report. EEJ. 2017;7:44.
  5. Amir A, Taba B, Berg D, Melano T, McKinstry J, Di Nolfo C, et al., editors. A low power, fully event-based gesture recognition system. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2017.
  6. Canziani A, Paszke A, Culurciello E. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678. 2016.
  7. Schuman CD, Potok TE, Patton RM, Birdwell JD, Dean ME, Rose GS, et al. A survey of neuromorphic computing and neural networks in hardware. arXiv preprint arXiv:1705.06963. 2017.

 

This is an excerpt of the journal article "Making BREAD: Biomimetic Strategies for Artificial Intelligence Now and in the Future," by Jeffrey L. Krichmar, William Severa, Muhammad S. Khan, and James L. Olds, published in 2019 in Frontiers in Neuroscience, 13:666; DOI: 10.3389/fnins.2019.00666, under a Creative Commons Attribution License (CC BY 4.0). The work was supported by USAF grant FA9550-18-1-0301.

 

Jeffrey L. Krichmar
Professor

Jeffrey L. Krichmar received a Ph.D. in Computational Sciences and Informatics from George Mason University in 1997. He spent 15 years as a software engineer on projects ranging from the PATRIOT Missile System at the Raytheon Corporation to Air Traffic Control for the Federal Systems Division of IBM. He is currently a professor in the Department of Cognitive Sciences and the Department of Computer Science at the University of California, Irvine.