When most people hear about AI today, they picture a black box that does everything, from chatting with you to shipping an entire project from the 20-page plan you drafted before even opening your IDE.
That misunderstanding gets worse as AI becomes a buzzword for anything that sounds clever.
I am not the type to accept things as they are. If you ask me about large language models, I want to know what made them possible. The same goes for neural networks. This curiosity leads to a simple question: what did AI look like when it first started?
To really understand this, I decided to build a tiny rule-based car diagnostician to see how the first systems thought. We’ll look at the rules, the reasoning, and write some Python to make the AI ask questions and draw conclusions, just like in the 1970s.
Where Intelligence Was First Imagined
It all started with a question that feels obvious once you hear it: given a set of knowledge and rules, how can we automate decision making?
The central belief was straightforward. If you could write the right rules and store enough facts, you could build an intelligent system.
This early AI tried to emulate human expertise. It represented knowledge with explicit symbols, facts, and rules, not statistical patterns. The goal was to mimic an expert’s decision-making within a narrow domain. In my toy example, that domain is car diagnostics.
These systems work best where problems can be defined clearly and rules encoded precisely. To see how this worked, let’s break down the components that powered these early expert systems.
Building Blocks of Symbolic AI
This approach went by many names: Symbolic AI, Good Old-Fashioned AI (GOFAI), or Expert Systems. The focus was on logic and deduction, not probabilities and predictions.
If we break the system down, the main pieces are clear:
- A Knowledge Base that stores facts and rules collected from experts.
- An Inference Engine that applies logic to draw conclusions.
- An I/O Interface so a human can interact with the system.
The system separates what it knows from how it reasons. This makes it easy to add or fix rules and to see exactly why the system reached a conclusion. That kind of transparency is rare in modern models, where you often have no idea how an answer was produced.
Building a Tiny Car Diagnoser
Now that the parts are clear, let us see how they fit together.
Knowledge base (rules)
This contains the expert knowledge, typically stored where the inference engine can read it. For example:
rules = [
    {'if': ['no_cricket_sound', 'headlights_dim'], 'then': ['battery_dead']},
    {'if': ['battery_dead'], 'then': ['issue_battery']},
    {'if': ['cricket_sound', 'slow_cranking'], 'then': ['starter_faulty']},
    {'if': ['starter_faulty'], 'then': ['issue_starter']},
    {'if': ['engine_turns_over', 'no_start', 'no_fuel_smell'], 'then': ['fuel_pump_issue']},
    {'if': ['fuel_pump_issue'], 'then': ['issue_fuel_system']},
    {'if': ['engine_turns_over', 'no_start', 'spark_plug_wet'], 'then': ['ignition_issue']},
    {'if': ['ignition_issue'], 'then': ['issue_ignition']},
    {'if': ['no_lights_no_sound'], 'then': ['battery_dead']},
]
Rules are simple mappings from conditions to conclusions. If the car has no lights and no sound, a dead battery is a reasonable conclusion.
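To make that mapping concrete, here is a quick sketch with a hypothetical rule_fires helper (not part of the system itself, just an illustration) that checks whether a single rule applies to a set of observed facts:

def rule_fires(rule, facts):
    # A rule fires when every one of its conditions is already a known fact
    return all(condition in facts for condition in rule['if'])

# The last rule maps no_lights_no_sound to battery_dead
print(rule_fires(rules[-1], {'no_lights_no_sound'}))  # True
print(rule_fires(rules[-1], {'headlights_dim'}))      # False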
I/O interface
This layer lets the system talk to the user. The simplest interface is a question-and-answer loop.
def ask_user(fact):
    while True:
        # Normalize the answer so "Yes", "NO ", etc. are accepted
        response = input(f"Do you observe: {fact}? (yes/no): ").strip().lower()  # e.g. 'cricket_sound'
        if response == 'yes':
            return True
        elif response == 'no':
            return False
        else:
            print("Please answer 'yes' or 'no'.")
A command line is perfectly adequate here. The rules drive the conversation.
Inference engine
The rules do not execute themselves. The inference engine applies rules to known facts to derive conclusions. The two classic reasoning styles are forward and backward chaining.
Forward chaining (data → conclusions)
Forward chaining is data-driven. It starts with a few known facts and works forward to see what can be concluded. The algorithm iterates through the rules, adding new facts as it goes, until no more conclusions can be drawn.
def forward_chaining(known_facts):
    facts = set(known_facts)
    new_fact_found = True
    while new_fact_found:
        new_fact_found = False  # assume no new facts this round
        # Go through each rule in the knowledge base
        for rule in rules:
            conditions = rule['if']
            conclusions = rule['then']
            # Check if all conditions are already known
            if all(condition in facts for condition in conditions):
                # Add conclusions if not already known
                for conclusion in conclusions:
                    if conclusion not in facts:
                        facts.add(conclusion)
                        new_fact_found = True  # we learned something new!
    return facts
Imagine you tell the system the headlights are dim and there is no cricket sound. Forward chaining takes these facts, finds a rule that matches, and adds battery_dead to its list of known facts. It then finds another rule that uses battery_dead and concludes issue_battery. It continues until every possible conclusion is found.
Forward chaining is useful when you want to discover all outcomes from an initial set of facts.
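Here is a small sketch of that walkthrough, using the rules and forward_chaining defined above:

# Two symptoms reported by the driver
initial_facts = ['headlights_dim', 'no_cricket_sound']
derived = forward_chaining(initial_facts)

# battery_dead is derived first, then issue_battery follows from it
print(sorted(derived))
# ['battery_dead', 'headlights_dim', 'issue_battery', 'no_cricket_sound']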
Backward chaining (goal → evidence)
Backward chaining is goal-driven. It starts with a hypothesis and works backward to find evidence that supports it.
# Askable facts: conditions that no rule concludes, i.e. raw symptoms the user can observe
all_conclusions = {c for r in rules for c in r['then']}
symptoms = {c for r in rules for c in r['if'] if c not in all_conclusions}

def backward_chaining(goal, known_facts):
    # If already known true
    if goal in known_facts:
        return True
    # If it is a raw symptom, ask the user
    if goal in symptoms:
        if ask_user(goal):
            known_facts.add(goal)
            return True
        else:
            return False
    # Find rules that conclude the goal
    supporting_rules = [r for r in rules if goal in r['then']]
    for rule in supporting_rules:
        # Check if all conditions of the rule are supported
        if all(backward_chaining(cond, known_facts) for cond in rule['if']):
            known_facts.add(goal)
            return True
    return False
Suppose the AI suspects the starter is faulty. To prove this, it finds a rule that concludes starter_faulty. It then treats the conditions of that rule, like cricket_sound and slow_cranking, as new sub-goals. If these facts are not already known, the system asks the user. It only concludes the starter is faulty if all the supporting evidence is confirmed.
This approach is very efficient because it only asks relevant questions to confirm or deny a specific hypothesis.
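As a sketch, a small driver of my own (the diagnose function below is illustrative, not a fixed part of the system) can walk through the top-level hypotheses and report the first one the user's answers confirm:

def diagnose():
    known_facts = set()
    # The top-level conclusions our rules can reach
    hypotheses = ['issue_battery', 'issue_starter', 'issue_fuel_system', 'issue_ignition']
    for hypothesis in hypotheses:
        # Only the questions relevant to this hypothesis are asked
        if backward_chaining(hypothesis, known_facts):
            print(f"Diagnosis: {hypothesis}")
            return
    print("No diagnosis found with the current rules.")

diagnose()

One quirk of this toy engine: a "no" answer is not remembered, so the same question can come up again under a later hypothesis. Real expert systems also tracked facts known to be false.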
When Intelligence Hit a Wall
Symbolic AI had obvious limits. If you have to write a rule for every possible situation, the system becomes impossibly complex and brittle. What happens when an unforeseen symptom appears? The system breaks because there is no rule for it.
This inability to handle uncertainty, and the difficulty of capturing all human knowledge in explicit rules, contributed to the AI winters of the 1970s and late 1980s. Expectations were high, computing power was limited, and hand-coded rules could not scale. Funding and enthusiasm faded.
Still, the era was valuable. Researchers learned that manually programming intelligence does not scale. These lessons helped steer the field toward approaches that let machines learn from data instead of relying on handcrafted rules.
Exploring GOFAI might feel quaint next to today’s large language models, but it reminds us that modern AI is built on decades of trial, error, and clever thinking. Understanding these roots shows why transparency and reasoning still matter. It also makes the trade-offs of modern AI, between scale and interpretability, much clearer.