MIT professor and early AI pioneer J.C.R. Licklider published his vision for the future in a seminal 1960 article entitled “Man-Computer Symbiosis,” in which he said the following:

“In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking.”

In the modern world, these machines are referred to as AI assistants. Developing this technology is a time-consuming and complex process that requires a sophisticated understanding of both psychology and computer programming, on top of the effort needed to collect, clean, and annotate substantial amounts of training data. It is therefore extremely desirable to reuse parts, if not the entirety, of an existing AI assistant across different domains and applications.


Table of Contents:

1. Teaching a Machine Human Skills

2. Acquiring Soft Skills

3. Cognitive, Multi-Level, and Model-Based AI

4. Reuse of pre-configured AI engines and models

5. Reuse of pre-configured functional AI units

6. Reuse of pre-configured whole AI solutions

7. How No-Code Platforms Create Universal Accessibility

8. The Future of Reusable, No-Code AI


Teaching a Machine Human Skills

Training these assistants has proven difficult because the AI must demonstrate particular human skills to aid and collaborate with people on meaningful tasks, such as career guidance or identifying the proper treatment for troubling symptoms.

To assist human beings at a realistic level, the foremost skill these assistants require is a strong command of language: the ability to interpret input and respond to any given request in natural language.

Human expressions are as diverse as they are complicated. Picture an application in which a chatbot interviews a potential job candidate with open-ended queries such as:

“What is the biggest challenge you are facing at your current job?”

An open-ended question like this invites a nearly unbounded range of answers.

Candidates may also digress from the initial topic of discussion, giving responses that are irrelevant or asking for clarification.

The assistant must then recognize and handle these responses the way a human would in order to continue the conversation in a natural fashion.
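
To make this concrete, here is a minimal, purely illustrative sketch of the kind of decision the assistant must make on every turn. The keyword lists and category names are hypothetical stand-ins for a real topic model, not anything described in the article:

```python
# Toy classifier (illustrative only): decide whether a candidate's reply is
# an answer, a clarification request, or a digression, so the assistant can
# respond the way a human interviewer would.

def classify_reply(reply: str) -> str:
    text = reply.lower().strip()
    # Clarification requests often come back as questions about the question.
    if text.endswith("?") or text.startswith(("what do you mean", "could you clarify")):
        return "clarification"
    # A real system would use a trained topic model; keywords stand in here.
    on_topic_keywords = {"challenge", "project", "deadline", "team", "manager"}
    if any(word in text for word in on_topic_keywords):
        return "answer"
    return "digression"

print(classify_reply("Could you clarify what you mean by challenge?"))  # clarification
print(classify_reply("My biggest challenge is a tight deadline."))      # answer
print(classify_reply("By the way, I love hiking on weekends."))         # digression
```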


Acquiring Soft Skills

An even greater challenge in teaching an AI to behave like a human is the development of soft skills.

Even though people can intuitively interpret incomplete or ambiguous signals from one another, AI still struggles immensely with these nuances.

These hurdles are hard for three reasons. First, it typically requires the combined expertise of software developers and psychologists to determine which algorithms or methods are needed for training.

Below are just a few of the many elements that factor into basic training (see the sketch after this list):

  • Natural language understanding (NLU) — this can include neural (data-driven) or symbolic approaches.
  • Supervised or unsupervised machine learning.
  • Sufficient training data.
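
As a minimal sketch of the supervised-learning ingredient above, the following uses scikit-learn (an assumption; the article names no specific library) to train a tiny intent classifier. In practice this would consume tens of thousands of labeled responses; a handful are shown only to keep the sketch runnable:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled responses; real training data would be far larger.
texts = [
    "My biggest challenge is scaling our backend",
    "I struggle with shifting priorities",
    "Tight deadlines are the hardest part",
    "What do you mean by challenge?",
    "Could you rephrase the question?",
    "Can you clarify that?",
]
labels = ["answer"] * 3 + ["clarification"] * 3

# TF-IDF features feeding a logistic regression: a classic supervised NLU setup.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Sorry, could you explain what you mean?"]))
```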

Second, code must be written to collect the data, train the models, and connect them together.

To successfully conduct an interview, for instance, tens of thousands of responses to every single open-ended question are needed to train assistants to handle them as part of the conversation.

Third, training AI from scratch is extremely time-consuming and typically iterative: if the models fail to perform well, the entire procedure must be repeated until each of them is considered acceptable.

Unfortunately, the vast majority of organizations still do not possess enough in-house AI expertise, nor do they have the training data needed to feed the AI, which makes adopting a viable solution extremely difficult. If the digital divide weren’t enough of an issue, imagine a future AI divide on top of it.

To democratize the adoption of this technology, one proposed solution is pre-trained models that allow for either rapid customization or direct use across a wide array of applications. Instead of constructing new models from the ground up, it would be far more efficient to piece them together from existing components, much as cars are assembled from parts like brakes, wheels, and an engine.


Cognitive, Multi-Level, and Model-Based AI


A cognitive, model-based architecture with three or more layers of components, each layer expanding the capabilities of the one below, can be pre-built or pre-trained and subsequently customized to support many applications.

Depending on their purposes and training regimens, these models commonly fall into two categories: general purpose and special purpose. General-purpose models can be observed in conversational agents, while special-purpose models can be observed in a physical robot.

Several of the most popular and well-known data-driven, general-purpose models include GPT-3 and BERT, technologies trained on large swaths of publicly available data. These can be reused to process language expressions. Conversely, symbolic models such as finite state machines can be configured as parsers to identify and extract more precise fragments of information.
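
A hedged sketch of both reuse styles follows. The Hugging Face transformers library is an assumption here (the article mentions BERT but no API), and the regex stands in for a small finite-state parser:

```python
from transformers import pipeline
import re

# Data-driven reuse: a pre-trained BERT-family model, downloaded and used
# as-is to judge the sentiment of an expression.
sentiment = pipeline("sentiment-analysis")
print(sentiment("I really enjoyed working with that team."))

# Symbolic reuse: a small pattern acting as a finite-state parser, configured
# to pull one precise fragment (years of experience) out of free text.
YEARS = re.compile(r"(\d+)\s+years?", re.IGNORECASE)
match = YEARS.search("I have 7 years of experience with Java.")
if match:
    print("years_of_experience =", int(match.group(1)))
```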


Reuse of pre-configured AI engines and models

Unfortunately, general-purpose models are often inadequate for specific applications. Because such models are trained on general data, they may be unable to decipher information that is unique to a domain.

Moreover, they fail to support specific tasks, such as inferring a user’s desires and needs from a conversation or managing the conversation itself. In particular, active listening engines allow an AI assistant to interpret input, including ambiguous and incomplete expressions. They also allow the assistant to deal with arbitrary interruptions and maintain the context of a conversation to complete a task.
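
One way to picture the context-keeping part of active listening is a stack of open topics; an interruption is pushed on top and the original topic resumed once it is handled. This is an assumed design for illustration, not the article’s actual engine:

```python
# Illustrative sketch: a topic stack that lets an assistant absorb an
# interruption and then resume the original thread of conversation.

class ConversationContext:
    def __init__(self):
        self.topics = []  # stack of currently open topics

    def start(self, topic: str):
        self.topics.append(topic)

    def interrupt(self, topic: str):
        # The current topic stays underneath so it can be resumed later.
        self.topics.append(topic)

    def finish(self) -> str | None:
        if self.topics:
            self.topics.pop()
        return self.topics[-1] if self.topics else None

ctx = ConversationContext()
ctx.start("biggest challenge at current job")
ctx.interrupt("candidate asks about the salary range")
resumed = ctx.finish()  # interruption handled; return to the original topic
print("Resuming:", resumed)
```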

While these engines allow for meaningful interaction, personal insights inference engines power a deeper understanding of users and a more customized engagement. Personal wellness or personal learning assistants, for example, can encourage users to stay on their treatment plans or learning courses based on the traits that make them tick. Combining big data analytics and Item Response Theory (IRT), such an engine can be pre-trained on data that captures the connection between a person’s characteristics and their communication patterns, then reused to infer insights from any future natural language conversation.
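
To give a flavor of the IRT idea, here is a toy sketch. The article names IRT but no formula, so the two-parameter logistic model and the “communication pattern” items below are assumptions chosen for illustration:

```python
import math

def p_yes(theta: float, a: float, b: float) -> float:
    """Probability that a person with trait level `theta` exhibits a pattern
    with discrimination `a` and difficulty `b` (2PL IRT model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical communication patterns observed in conversation:
# each item is (observed?, discrimination, difficulty).
observations = [(1, 1.2, -0.5), (1, 0.8, 0.0), (0, 1.5, 1.0)]

def log_likelihood(theta: float) -> float:
    return sum(
        math.log(p_yes(theta, a, b) if seen else 1 - p_yes(theta, a, b))
        for seen, a, b in observations
    )

# Estimate the latent trait with a coarse maximum-likelihood grid search.
best_theta = max((t / 10 for t in range(-30, 31)), key=log_likelihood)
print(f"estimated trait level: {best_theta:.1f}")
```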

Lastly, to properly interpret a user’s expressions the way humans do, conversation-specific language engines assist AI during conversations. Sentiment analysis engines automatically detect the sentiment of an expression, working in tandem with a question detection engine that identifies whether the input is a request or question that needs a response from the assistant.
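
A question detection engine can start from something as simple as the heuristic below; a production engine would be a trained model, so treat this as an assumed, minimal stand-in:

```python
# Heuristic question detector (illustrative): flags whether an input needs
# a response from the assistant, complementing the sentiment analysis engine.
WH_WORDS = ("who", "what", "when", "where", "why", "how", "which",
            "can", "could", "would", "do", "does", "is", "are")

def needs_response(utterance: str) -> bool:
    words = utterance.strip().lower().split()
    if not words:
        return False
    return utterance.strip().endswith("?") or words[0] in WH_WORDS

print(needs_response("What does the role pay?"))    # True
print(needs_response("I enjoy mentoring juniors"))  # False
```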


Reuse of pre-configured functional AI units

Although specific and general AI models and engines give the assistant a foundation of intelligence, a complete solution is necessary to produce specific services or accomplish particular tasks. When an interview assistant converses with users on a topic, its objective is to extract information for assessing their fitness for a job role.

Various functional units are necessary to support this kind of behavior, since a cognitive AI must be able to carry an exchange through to completion. For instance, topic-specific AI units are individually enabled to converse with users about a particular subject, so a conversation library includes a number of units that each support a particular task.

Using a model-based architecture, these functional AI units can be pre-configured for direct reuse and can be extended to incorporate new actions based on new sets of conditions.
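
The sketch below shows what “extend with new actions based on new conditions” might look like. The class and method names are hypothetical, invented for illustration:

```python
# Sketch of a pre-configured functional unit being reused and then extended
# with a new condition-action rule for a new application.

class TopicUnit:
    """Converses with a user about one subject via condition-action rules."""
    def __init__(self, topic: str):
        self.topic = topic
        self.rules = []  # list of (condition, action) pairs

    def extend(self, condition, action):
        self.rules.append((condition, action))

    def respond(self, user_input: str) -> str:
        for condition, action in self.rules:
            if condition(user_input):
                return action(user_input)
        return "Tell me more."

# A pre-configured unit from the conversation library, reused directly...
unit = TopicUnit("biggest challenge")
unit.extend(lambda t: "deadline" in t, lambda t: "How do you manage tight deadlines?")
# ...and extended with a new condition-action pair for a new application.
unit.extend(lambda t: "team" in t, lambda t: "What role do you play on the team?")

print(unit.respond("Shipping on deadline is my biggest challenge"))
```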


Reuse of pre-configured whole AI solutions

The highest layer of cognitive AI is a set of end-to-end solution templates. In the context of developing assistants, these templates pre-determine the flow of tasks to be performed as well as the pertinent knowledge base supporting the functions needed during a given interaction.

A template for a job interview, for instance, would require a set of questions as well as a knowledge base for answering frequently asked inquiries about the job itself. Similarly, a template based around personal wellness could outline the tasks the assistant must perform, like delivering reminders or checking status.
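
Structurally, such a template could be as plain as the data below. The field names and contents are assumptions based on the article’s description, not a real platform’s schema:

```python
# Illustrative end-to-end templates: a task flow plus a knowledge base.
interview_template = {
    "task_flow": [
        "greet_candidate",
        "ask_screening_questions",
        "answer_candidate_faqs",
        "close_interview",
    ],
    "questions": [
        "What is the biggest challenge you are facing at your current job?",
        "Why are you interested in this role?",
    ],
    "knowledge_base": {
        "is the role remote?": "Yes, the role is fully remote.",
        "what does the role pay?": "The range is shared after the first round.",
    },
}

wellness_template = {
    "task_flow": ["deliver_reminder", "check_status", "log_response"],
    "knowledge_base": {},
}
```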


How No-Code Platforms Create Universal Accessibility

Not only do reusable systems and components save time and effort when developing a solution, but they also enable no-code customization of various components. Not having to go under the hood and program accelerates the time it takes to bring newly customized AI to market. Consider the following use case:

Imagine a recruiter who wants to use an AI to interview candidates for a job at their organization. They can take an existing template and alter the interview questions and responses to build a customized version that suits their business. This greatly simplifies the construction of an end-to-end solution for individuals who lack the deep technical skills needed to create a new model from scratch.
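
The customization step itself can amount to nothing more than editing data, which is what makes it no-code. Here is a minimal sketch under that assumption, with an abbreviated version of the template from the previous section:

```python
import copy

# Base template (abbreviated, hypothetical contents).
base = {
    "questions": ["What is the biggest challenge at your current job?"],
    "knowledge_base": {"is the role remote?": "Yes, fully remote."},
}

# The recruiter's edit: same flow and engines, new business-specific content.
custom = copy.deepcopy(base)
custom["questions"] = ["Describe a data pipeline you have built."]
custom["knowledge_base"]["is the role remote?"] = "Hybrid, two days on site."
print(custom)
```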


The Future of Reusable, No-Code AI

As this capability matures, the adoption of AI as a universal requirement in any information technology organization will grow at exponential rates, to say nothing of individual users, hobbyists, and enthusiasts. To realize this paradigm, key advances are necessary in several areas.

The first is to create platforms that allow reusable systems and components to be understood by non-technical individuals.

The second is to support these reusable platforms with automatic debugging. As solutions increase in sophistication and complexity, it becomes harder to manually examine their potential behavior under a rapidly growing number of circumstances. Although there is initial research on profiling a given assistant, much more is needed going forward.
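
One simple form such profiling could take is replaying scripted conversations against an assistant and flagging unexpected replies. This is an assumed design for illustration, not a description of the cited research:

```python
# Sketch of automated behavior profiling: replay scripted exchanges and
# collect any turn where the assistant's reply misses an expected fragment.

def run_profile(assistant, script):
    failures = []
    for user_input, expected_fragment in script:
        reply = assistant(user_input)
        if expected_fragment not in reply:
            failures.append((user_input, reply))
    return failures

# A stand-in assistant; a real one would be the full solution under test.
def toy_assistant(text: str) -> str:
    if "challenge" in text:
        return "Could you tell me more about that challenge?"
    return "Noted."

script = [
    ("My challenge is scale", "challenge"),
    ("The weather is nice", "Noted"),
]
print(run_profile(toy_assistant, script))  # an empty list means the profile passed
```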

The third is to ensure that AI as a generally accessible technology is used responsibly and ethically. If an individual can simply reuse a functional unit to illegally steal sensitive data from users, who will be the individual protecting them? New usage guidelines and others measures will be required to ensure the creation, deployment, and ongoing maintenance of safe and trustworthy solutions.