Unlocking AI's Potential: The Power of Simple Questions


As we stand at the forefront of technology, feeling the pulse of artificial intelligence (AI) as it develops, we cannot help but reflect on the nature of human intellect: familiar yet mysterious, close at hand yet tantalizingly out of reach. Scientists and researchers find themselves pondering how to bridge the gap, raising machine intelligence to human levels or even beyond. In his book "Toward Human-Level Artificial Intelligence," Eitan Michael Azoff, chief analyst at Kisaco Research, argues that the key to unlocking superhuman intelligence lies in a methodical approach: starting from simple questions and gradually decoding how the brain processes sensory information and executes cognitive tasks.

Despite the rapid and sometimes chaotic advancement of artificial intelligence, we must ask whether researchers have become sidetracked in their pursuit of complexity, forgetting the admonitions of Leon Cooper, the Nobel Prize-winning physicist, from 1972. Are we chasing too much diversity and complication, straying down the wrong path?

Cooper outlined three crucial pieces of advice that are relevant even today:

  • Do not rush to solve a complex problem, especially if a simpler version is unaddressed.
  • Be wary of blindly accepting things you cannot comprehend.
  • Beware of those who claim the solution resides within complexity, obscuring the real picture. While this may occasionally hold true, it is often simply an excuse to avoid solving the problem.

The intricate workings of the human brain can easily lead us astray, making its mechanisms difficult to explain. To build true AI, we must first deconstruct tasks and systematically understand how the brain encodes information, and how that information is communicated, in order to accomplish cognitive tasks such as thinking, learning, problem-solving, and integrating multisensory representations. Some may argue that this process is overly complex and nearly impossible to understand, but it is critical not to succumb to this perspective.

Azoff, in his latest publication, categorizes intelligence systems into three distinct levels: animal-level, human-level, and superhuman-level intelligence systems. Within the human level, he further distinguishes humanoid, engineering-based, and hybrid approaches.

By elucidating these tiers, he aims to break free from the constraints of existing systems, simplify research pathways, and focus on solving core issues, thereby providing a refreshed framework en route to achieving human-level AI.

The first approach involves comprehending the human brain's operations and using that understanding to shape a human-like AI, known as human-like human-level AI (HL2AI). This strategy draws its inspiration from neuroscience, treating our brain as the ultimate archetype we aspire to simulate. It is here that neural studies become pivotal in guiding the creation of human-level intelligence systems.

On the other side of the spectrum, the second approach follows engineering principles, spanning knowledge engineering, computer science, and mathematics, and is termed engineering human-level AI (engHLAI). Modern electronic computation, with its superior capabilities in storage, long-term memory, and rapid numerical calculation, exceeds the brain's equivalent processes.

Thus, leveraging contemporary information technology can empower engHLAI systems to develop formidable intelligence.

The third proposition merges the first two, creating a hybrid strategy that combines humanoid AI with engineering. This integration is seen as the most promising route toward realizing human-level AI. By using patterns and concepts identified within our brains and applying engineering algorithms to expedite processing, we can distill the brain's complicated operations. A crucial component of this method is the advancement of brain-machine interface technology: precise, non-invasive scanning of the brain would bolster the hybrid approach's feasibility.

The book postulates that, to attain human-level intelligence, either the first or the third approach is more pragmatic than relying solely on the engineering-focused second method. While the deep neural networks representative of today's AI have yielded considerable success, the development of cognitive models has been relatively slow.


Nonetheless, cognitive architectures could play a pivotal role in the engineering framework for human-level AI. By incorporating pre-constructed information-processing structures, in the form of cognitive architectures, into engHLAI, and pairing them with neural networks capable of adapting to environments and challenges, we create a system with the potential to evolve, reminiscent of how the human genome's blueprint allows diverse individual personalities to be shaped by experience.

Reflecting on the history of human flight provides an instructive analogy. Early attempts to mimic birds by strapping on wing-like structures powered by human muscle or bicycle mechanisms often ended in failure. Yet fast-forward to today: we have created supersonic jets capable of traversing continents in mere hours, and rockets that can carry us to the moon, feats unattainable by any avian creature.

In the realm of AI, similar skepticism posits that engineering methodologies may transcend evolutionary boundaries, offering a glimpse of the potential for achieving strong AI.

However, it was only by grasping the aerodynamic principles underlying bird flight that humans first achieved controlled flight; engineering progress then steadily surpassed avian flight capabilities. A comparable paradigm shift requires understanding how animal or human brains function. Grasping foundational principles akin to aerodynamics will let us use engineering methods to design systems that exceed the restrictions imposed by natural evolution. EngHLAI can bypass the slower ionic transmission found in the brain, employing modern high-speed electronics to combine advanced computing with human-level AI (HLAI). The pursuit begins with a deep comprehension of the brain's functional mechanisms, making the path from simple animal brains to the complex human brain potentially the swiftest route to realizing HLAI.

Once a human-level AI (HLAI) system is established and the brain's functions are thoroughly decoded, engineered intelligent systems (engHLAI) may go on to tackle complexities far beyond human capabilities.

Such systems may then take over the development of subsequent, higher-intelligence systems, leading to a scenario in which superior intelligences are no longer created by humans but by AI entities that surpass human achievement in intelligence.

At present, when neuroscience does not yet suffice for emulating human intelligence, we can explore two alternative strategies. One route is to synthesize intelligence from simpler biological models: small, intelligent creatures with far fewer neurons in their brains. This approach seeks fundamental principles that can later be scaled up toward human-level AI, and is termed "animal-level AI," where "animal" excludes humans.

The second strategy involves crafting an environment driven by evolutionary algorithms that facilitate learning. By evolving optimal algorithms within virtual realms and then embedding them in human-level AI systems, we initiate this process in simulation.

In these settings, the fundamental principles governing intelligent systems evolve from animal-level AI models. Over time, virtual populations would undergo rapid evolution guided by Darwinian natural selection, improving their survival and reproductive mechanisms and ultimately elevating their intelligence. Eventually, the refined evolutionary algorithms would be integrated into human-level AI, bolstering its learning and adaptive capabilities.
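The evolutionary loop described above (variation, Darwinian selection, reproduction) can be sketched as a minimal genetic algorithm. This is an illustrative toy, not the book's design: the bit-string genome and the "count the ones" fitness function are stand-ins for whatever survival skill a virtual population would actually be selected on.

```python
import random

def evolve(pop_size=30, genome_len=20, generations=60, seed=0):
    """Minimal genetic algorithm: evolve bit-strings toward all-ones.

    Fitness = number of 1-bits (a stand-in for 'survival skill');
    selection, crossover, and mutation play the Darwinian roles
    described in the text.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]

    def fitness(genome):
        return sum(genome)

    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Reproduction: single-point crossover plus occasional mutation.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:
                child[rng.randrange(genome_len)] ^= 1  # flip one bit
            children.append(child)
        pop = parents + children

    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # fitness of the best evolved genome
```

Even this crude loop reliably climbs toward the optimum, which is the point of the analogy: the selection pressure, not the designer, discovers the solution.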

Echoing Cooper's guidance, Azoff advocates first establishing simple animal-level intelligence before grappling with more complex issues, in sharp contrast to today's prevailing approach, in which large language models attempt to simulate advanced human functions in isolation.

Our lineage traces back to the earliest life forms, making visual cognition the brain's initial process, long before language comprehension. Whether internal dialogue or conversation with others, both evolved on the foundation of visual cognitive skills.

As William G. Ellen pointed out, "Over 50% of the brain cortex is dedicated to processing visual information." Understanding visual processing mechanisms could therefore illuminate the brain's operation as a whole. Thus, Azoff underscores the value of vision-centric tasks as litmus tests for current human-level AI progress, heralding a pathway toward cognitive breakthroughs.

More concretely, Azoff suggests a simplified scientific exploration methodology encapsulated in the Plan-Do-Check-Act (PDCA) cycle, detailing the following steps:

  • Plan: Design experimental schemes with clear objectives and methodologies.
  • Do: Implement the experiment or task as planned, maintaining orderly procedures.
  • Check: Monitor and analyze results, pausing as necessary to evaluate and identify potential challenges.
  • Act: Based on evaluations from the Check phase, assess results and take actions accordingly, whether it means deepening knowledge or revising strategies.
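The four steps above form a closed loop, which can be sketched as a small driver function. Everything below is a hedged illustration: the `Experiment` record and the caller-supplied `run_trial`, `evaluate`, and `revise` callables are hypothetical placeholders, since the book prescribes the cycle, not an implementation.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    objective: str
    method: str

def pdca(run_trial, evaluate, revise, experiment, max_cycles=5):
    """Drive an experiment through repeated Plan-Do-Check-Act cycles.

    The loop only encodes the cycle itself; run_trial, evaluate,
    and revise are supplied by the caller.
    """
    for cycle in range(1, max_cycles + 1):
        result = run_trial(experiment)          # Do: execute the plan
        ok, notes = evaluate(result)            # Check: analyze results
        if ok:                                  # Act: accept...
            return cycle, result
        experiment = revise(experiment, notes)  # ...or revise the plan
    return max_cycles, None

# Toy usage: "succeed" once the method description contains 'refined'.
trial = lambda e: e.method
check = lambda r: ("refined" in r, "needs refinement")
fix = lambda e, n: Experiment(e.objective, e.method + " refined")
cycles, outcome = pdca(trial, check, fix, Experiment("test vision", "baseline"))
print(cycles, outcome)  # → 2 baseline refined
```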

Alongside this methodology, the book identifies several components crucial for achieving hybrid AI, including:

  • Movement and Perception: Equip systems with capabilities to navigate and perceive the real world, including reflexive self-protection in dynamic environments.
  • Causal Inference: Within bounds, deduce causal relationships and update probabilities as new information emerges, using Bayesian foundations.
  • Large Language Models: Employ cutting-edge technologies to facilitate human-machine communication.
  • Hyperdimensional Computing: Facilitate information storage within semantic spaces, ensuring continuous updating and interaction.
  • Preset Instructions: Define system motives and objectives; or reprogram to modify behaviors.
  • Neuroregulation: Augment the activity of certain neurons while inhibiting others, akin to neurotransmitter gradients in the brain, leading to forms of reinforcement learning.
  • Autonomous Behaviors: Essential mechanisms safeguard systems from harm, including control over reflexive motion systems, driving the clock, and managing the PDCA cycle.
  • Neuron Generation & Removal: Introduce or prune neurons in networks, echoing neuroplasticity that enhances adaptability and learning capabilities.
  • Non-volatile Memory, Storage, and Computation: Facilitate reading and writing processes akin to computer RAM for sustained information storage.
  • Internal Communication: Designate roles for the left and right system halves, focusing on immediate tasks versus long-term planning.
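The "Causal Inference" component above rests on Bayesian updating: revising the probability of a hypothesis as new evidence arrives. A textbook Bayes-rule update makes this concrete; the scenario and all the numbers here are invented for illustration and do not come from the book.

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical robot example: H = "the gripper slipped",
# E = "the object was dropped".
p_h = 0.1               # prior belief that the gripper slipped
p_e_given_h = 0.9       # dropping is very likely if it slipped
p_e_given_not_h = 0.05  # dropping is rare otherwise

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = bayes_update(p_h, p_e_given_h, p_e)
print(round(posterior, 3))  # → 0.667
```

One observation raises the belief from 10% to roughly 67%; feeding the posterior back in as the next prior is what lets the system "update probabilities as new information emerges."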

The book further posits several fundamental standards essential for realizing human-level AI:

  • Internal Model: The AI brain should harbor an internal model of the world, utilizing scientific methods to discern and learn its environment.
  • Cortical Division Structure: AI must mirror the brain's hemispheric specialization with internal dialogues aiding decision-making.
  • Internal Reward System: Analogous to dopamine’s role in reinforcement learning.
  • Diffuse Decision Mechanism: AI must enable collective responses, orchestrating decisions from aggregated inputs and acknowledging interactions across neuron populations.
  • Causal Reasoning: Facilitate understanding of event chains and interrelated causality.
  • Goal Orientation with Autonomous Intermediate Goals: Aim towards breaking down complex tasks into manageable steps.
  • Comprehension of Physical Mechanisms: Develop understanding grounded in scientific knowledge of the physical world.
  • Ethical Behavior: AI should integrate a moral compass to ensure ethical conduct.
  • Continuous Learning: Embed intrinsic motivation to accumulate knowledge and foster relationships.
  • Abstract Thinking: Enhance capacity to extract universal concepts from specific instances.

In pursuit of human-level intelligence, Azoff identifies three cognitive architectures capable of engineering brain-like simulations: Soar, ACT-R, and Adaptive Resonance Theory (ART). These top-down frameworks aim to encapsulate fundamental processes present in human cognitive functioning.

(1) Soar

Originating in Allen Newell's unified theory of cognition, Soar serves as a paradigm of unified cognitive architecture.

Its design supports multiple micro-theories concentrated on specific cognitive facets, effectively integrating them into one cohesive system.

When striving for goals, Soar employs various methodologies: reasoning from external resources, utilizing procedural memory for logic, extracting solutions from various memory types, or directly interacting with the external environment.

Should Soar encounter a bottleneck in reaching a goal, it autonomously generates a sub-goal, channeling focus back to the ongoing task until it overcomes the hindrance. This cyclical process persists until the ultimate goal is attained, with newly acquired knowledge integrated into the overarching structure.

By employing means-ends analysis, Soar narrows search spaces more effectively. It does this through a recursive process of selecting actions that minimize the disparity between the present and target states.

To facilitate this, the system must accurately detect status shifts and act accordingly.
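The selection process just described, repeatedly applying the operator that most reduces the measured difference between the current and goal states, can be sketched on a toy state space. This is a greedy illustration of means-ends analysis, not Soar itself: the number puzzle, the action set, and the distance function are all invented for the example.

```python
def means_ends(state, goal, actions, distance, max_steps=50):
    """Greedy means-ends analysis: repeatedly apply the action that
    most reduces the measured disparity between state and goal."""
    steps = []
    for _ in range(max_steps):
        if state == goal:
            return steps
        # Pick the operator that minimizes the remaining disparity.
        name, new_state = min(
            ((n, f(state)) for n, f in actions.items()),
            key=lambda pair: distance(pair[1], goal),
        )
        if distance(new_state, goal) >= distance(state, goal):
            break  # impasse: no action helps (Soar would sub-goal here)
        state = new_state
        steps.append(name)
    return steps

# Toy problem: reach the number 12 from 0 using +1, +5, or *2.
actions = {"+1": lambda x: x + 1, "+5": lambda x: x + 5, "x2": lambda x: x * 2}
plan = means_ends(0, 12, actions, distance=lambda a, b: abs(a - b))
print(plan)  # → ['+5', '+5', '+1', '+1']
```

The `break` branch marks exactly the situation the text calls a bottleneck: the greedy loop is stuck, and Soar's answer is to spawn a sub-goal rather than give up.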

As problems are tackled, newly gained memories intertwine with current situational knowledge in short-term memory. All tasks are converted into problem spaces, with long-term memory composed of "production systems" that address task requirements. The knowledge-search mechanism matches against all productions, selecting the operator best aligned with the current query. When faced with familiar challenges, Soar rapidly retrieves pertinent information from long-term memory, ensuring efficient problem resolution.

(2) ACT-R

This second cognitive framework, ACT-R (Adaptive Control of Thought-Rational), was developed by John Anderson and Christian Lebiere at Carnegie Mellon University. Like Soar, ACT-R incorporates seven key modules, notably vision, goals, retrieval, and motor control, which function concurrently yet execute one rule at a time.

The central module of ACT-R orchestrates inter-module communication, accessing buffers to update the other components via rules.

Through vision and motor control systems, ACT-R interacts with environments, such as typing on a keyboard or perceiving screen images.

In ACT-R, learning transpires across multiple loci and modalities, with repetition strengthening declarative memory as a core mechanism. The framework's production rules are grounded in empirical data on memory, learning, and problem-solving.

Each cycle of ACT-R's operation targets a single goal, shifting states via triggered production rules. These rules execute sequentially, yet can nest and link to one another, yielding new outputs for subsequent actions.

The matching module recognizes the most "efficient" production rules, weighing not only their goal-achievement potential but also their execution costs, essentially conducting a cost-benefit analysis. Costs can manifest as time constraints or pressure related to the goal's immediacy.

Without external oversight, ACT-R continually fine-tunes rules and their cost-efficiency, selecting optimal strategies through every operational cycle.
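The cost-benefit selection described above is often summarized in the ACT-R literature as expected utility U = P * G - C, where P is a rule's estimated probability of achieving the goal, G the goal's value, and C the rule's cost. The sketch below applies that formula to pick among competing rules; the three productions and their numbers are hypothetical, and real ACT-R additionally learns these quantities from experience and adds noise to the selection, both omitted here.

```python
def select_rule(rules, goal_value):
    """Pick the production with the highest expected utility
    U = P * G - C (P: success probability, G: goal value, C: cost)."""
    def utility(rule):
        name, p_success, cost = rule
        return p_success * goal_value - cost
    return max(rules, key=utility)

# Hypothetical productions competing to satisfy the same goal.
rules = [
    ("retrieve-from-memory", 0.90, 0.5),  # reliable, moderate cost
    ("guess",                0.40, 0.1),  # cheap but unreliable
    ("ask-for-help",         0.95, 2.0),  # accurate, very costly
]
best = select_rule(rules, goal_value=2.0)
print(best[0])  # → retrieve-from-memory
```

Note how the goal's value shifts the winner: as `goal_value` grows, expensive-but-reliable rules overtake cheap guesses, which is the "goal immediacy" pressure the text mentions.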

(3) Adaptive Resonance Theory (ART)

Stephen Grossberg introduced the ART cognitive framework, rooted in his study of the brain's unsupervised self-correction mechanisms. A significant challenge for many neural network models arises when they are presented with new training material: "catastrophic forgetting." Models need a way to assimilate new inputs without forsaking previously acquired knowledge. While contemporary language models partially address this through fine-tuning, catastrophic forgetting remains pervasive in AI and limits its applications. ART addresses this challenge, enabling swift and stable learning while curbing catastrophic forgetting.

Using unsupervised algorithms, ART clusters input patterns and controls the granularity of learning through a vigilance parameter.

High vigilance biases learning toward narrow, specific categories, whereas low vigilance encourages more general, abstract classifications. When a new input deviates from all existing clusters, ART opens a new classification path, assigning a fresh output neuron when necessary and sustaining adaptability as inputs change.

Embedding the vigilance mechanism allows ART to maximize generalization while minimizing prediction errors. It tracks discrepancies, incrementally raising vigilance to rectify errors while sacrificing as little generalization as possible to accommodate newly established categories. This mechanism mirrors learning processes influenced by acetylcholine in the brain.

From an engineering perspective, ART represents input data as vectors. A foundational design comprises input neurons (Layer 1) and output neurons (Layer 2): Layer 1 acts as a feature-detecting network for input patterns, while Layer 2 connects to Layer 1 through adaptive weights.

Layer 1's activity corresponds to the bottom-up input pattern, which is contrasted with top-down expectations from Layer 2's outputs. The competitive network within Layer 2 endorses the strongest collective neuronal response on a "winner-takes-all" principle, enabling efficient category selection for the current input.
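The interplay of winner-takes-all competition and the vigilance test can be sketched in an ART1-style clustering routine for binary vectors. This is a heavily simplified illustration, not Grossberg's full model: it omits complement coding, the choice function's tie-breaking parameter, and the layered neural dynamics, keeping only the resonance-or-reset logic.

```python
def art1_cluster(patterns, vigilance):
    """Minimal ART1-style clustering of binary vectors.

    Each category stores a prototype (bitwise AND of its members).
    A pattern joins the best-matching category only if the match
    ratio |pattern AND prototype| / |pattern| clears the vigilance
    threshold; otherwise a new category (output neuron) is created.
    """
    prototypes, labels = [], []
    for p in patterns:
        ones = sum(p)
        # Winner-takes-all: try categories in order of overlap, best first.
        order = sorted(
            range(len(prototypes)),
            key=lambda j: -sum(a & b for a, b in zip(p, prototypes[j])),
        )
        for j in order:
            overlap = sum(a & b for a, b in zip(p, prototypes[j]))
            if ones and overlap / ones >= vigilance:  # resonance test
                prototypes[j] = [a & b for a, b in zip(p, prototypes[j])]
                labels.append(j)
                break
        else:
            prototypes.append(list(p))  # reset: commit a fresh output neuron
            labels.append(len(prototypes) - 1)
    return labels, prototypes

patterns = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
labels, protos = art1_cluster(patterns, vigilance=0.6)
print(labels)  # → [0, 0, 1]
```

Raising `vigilance` toward 1.0 splits the first two patterns into separate categories, which is exactly the granularity knob the text describes; and because committed prototypes are only ever refined by intersection, old categories are never overwritten, which is how the scheme sidesteps catastrophic forgetting.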

In 2021, Grossberg detailed these insights in "Conscious Mind, Resonant Brain: How Each Brain Makes a Mind," marshaling psychological and neuroscientific evidence for ART's hypotheses. Interested readers are encouraged to explore that work for a comprehensive account.

As we delve further into the human understanding of intelligence, gazing at the stars prompts reflection on transcending our boundaries. As Eitan Michael Azoff suggests, beginning at the simplest point, by comprehending animal brain function, may lead us to achieve animal-level AI first.