What are Agents and Environments in AI?
An AI system consists of an agent and its environment. An agent performs tasks within its environment, and a single environment can contain multiple agents.
An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
- Human beings have sensors such as eyes, ears, nose, tongue, and skin, and effectors such as hands, legs, and mouth that act upon the environment.
- A robot may have cameras and infrared range finders as sensors, and motors and actuators as effectors.
- Agent Performance – a measure of how successfully an agent completes its tasks.
- Behavior – the action an agent performs after receiving a sequence of percepts.
- Percept – an agent's perceptual input at a given instant.
- Agent Function – a mapping from the percept sequence to an action.
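The agent function can be made concrete with a minimal sketch. The percepts and actions below (a toy vacuum-style world) are illustrative, not part of any standard library:

```python
# A minimal sketch of an agent function: a mapping from the percept
# sequence seen so far to an action. All names here are illustrative.

def agent_function(percept_sequence):
    """Map the full percept history to an action."""
    latest = percept_sequence[-1]       # most recent percept
    if latest == "dirty":
        return "suck"
    elif latest == "location_A":
        return "move_right"
    else:
        return "move_left"

# The agent has perceived a clean square, then dirt.
print(agent_function(["location_A", "dirty"]))  # suck
```

Note that the function receives the whole sequence, even though this simple sketch only inspects the latest percept; richer agents use the history too.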
Types of AI Agents
- Simple Reflex Agent
- Model-based reflex agent
- Goal-based agents
- Utility-based agent
- Learning agent
1. Simple Reflex Agent:
- Simple reflex agents are the simplest kind of agent. They act only on the current percept and ignore the rest of the percept history.
- They follow condition-action rules, meaning the agent maps the current state directly to an action.
- They succeed only when the environment is fully observable.
Condition-Action Rule – a rule that maps a perceived condition directly to an action, e.g. "if the square is dirty, then suck."
Problems with simple reflex agents –
- Very limited intelligence.
- No knowledge of non-perceptual parts of the current state.
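A simple reflex agent can be sketched as a lookup table of condition-action rules. The two-square vacuum world below is a made-up illustration:

```python
# Hypothetical vacuum-world sketch: the agent maps the current percept
# directly to an action via condition-action rules, ignoring all history.

RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    """Look up the action for the current percept only."""
    location, status = percept
    return RULES[(location, status)]

print(simple_reflex_agent(("A", "dirty")))   # suck
print(simple_reflex_agent(("B", "clean")))   # move_left
```

Because the table is keyed on the current percept alone, the agent cannot remember, for instance, whether it has already visited the other square.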
2. Model-Based Reflex Agent
- It works in a partially observable environment by keeping track of the state.
- It has two important parts:
- Model: knowledge of "how things happen in the world"; this is why it is called a model-based agent.
- Internal State: a representation of the current state, built up from the percept history.
- Updating the state requires information about −
- how the world evolves, and
- how the agent's actions affect the world.
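The internal state and model can be sketched together. In this illustrative two-square world, the agent remembers which squares it believes are clean and models the effect of its own "suck" action:

```python
# Sketch of a model-based reflex agent for a partially observable world:
# it keeps an internal state (believed status of each square), updated
# from percepts and from a model of how its own actions change things.

class ModelBasedAgent:
    def __init__(self):
        # Internal state: believed status of each square (None = unknown).
        self.world = {"A": None, "B": None}
        self.location = "A"

    def update_state(self, percept):
        """Fold the new percept into the internal state."""
        location, status = percept
        self.location = location
        self.world[location] = status

    def act(self, percept):
        self.update_state(percept)
        if self.world[self.location] == "dirty":
            # Model of action effects: sucking leaves this square clean.
            self.world[self.location] = "clean"
            return "suck"
        # Head for a square whose status is still unknown, if any.
        for square, status in self.world.items():
            if status is None:
                return f"move_to_{square}"
        return "no_op"

agent = ModelBasedAgent()
print(agent.act(("A", "dirty")))   # suck
print(agent.act(("A", "clean")))   # move_to_B
```

Unlike a simple reflex agent, this one can decide to visit square B precisely because its internal state records that B has never been observed.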
3. Goal-Based Agents
- Goal-based agents choose their actions in order to achieve goals.
- They are more flexible than reflex agents.
- Knowledge of the current state alone is sometimes not enough for the agent to decide what to do.
- They use searching and planning, which makes the agent proactive.
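Searching toward a goal can be sketched with a breadth-first search over a small state graph. The room names and map below are invented for illustration:

```python
# Hedged sketch of a goal-based agent: instead of reacting, it searches
# for a sequence of states that reaches its goal.

from collections import deque

# Illustrative map: which states are reachable from which.
GRAPH = {
    "start": ["hall"],
    "hall": ["start", "kitchen", "bedroom"],
    "kitchen": ["hall", "goal"],
    "bedroom": ["hall"],
    "goal": [],
}

def plan(start, goal):
    """Breadth-first search for a shortest path from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

print(plan("start", "goal"))  # ['start', 'hall', 'kitchen', 'goal']
```

The plan is computed before acting, which is what makes the agent proactive rather than purely reactive.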
4. Utility-Based Agents –
- Utility-based agents choose their actions based on a preference (utility) for each state.
- Goals alone are inadequate when −
- there are conflicting goals, only some of which can be achieved.
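Choosing by utility can be sketched by scoring each candidate outcome and taking the maximum. The states and numbers below are made up for illustration:

```python
# Sketch: a utility-based agent scores each candidate outcome with a
# utility function and picks the action whose outcome scores highest.

def utility(state):
    """Higher is better: prefer clean rooms, penalize battery use."""
    return 10 * state["rooms_clean"] - state["battery_used"]

# Illustrative outcomes of each available action.
ACTIONS = {
    "clean_one_room": {"rooms_clean": 1, "battery_used": 3},
    "clean_two_rooms": {"rooms_clean": 2, "battery_used": 9},
    "idle": {"rooms_clean": 0, "battery_used": 0},
}

best = max(ACTIONS, key=lambda a: utility(ACTIONS[a]))
print(best)  # clean_two_rooms (utility 11 beats 7 and 0)
```

The utility function resolves trade-offs that a bare goal ("clean the rooms") cannot: it weighs cleanliness against battery cost and ranks every outcome on one scale.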
5. Learning Agents –
- A learning agent learns from its past experiences.
- It starts acting with basic knowledge and then adapts through learning.
- It has four conceptual components:
- Learning element: makes improvements by learning from the environment.
- Critic: provides feedback describing how well the agent is doing with respect to a fixed performance standard.
- Performance element: selects the external action to take.
- Problem generator: suggests actions that lead to new and informative experiences.
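The four components can be sketched together in a toy two-action world. The rewards, learning rate, and exploration schedule below are all invented for illustration:

```python
# Toy sketch of the four components: the performance element picks the
# best-known action, the critic scores actions against a fixed standard,
# the learning element updates the learned values, and the problem
# generator suggests actions to try for new experience.

import itertools

action_values = {"left": 0.0, "right": 0.0}   # learned knowledge
REWARD = {"left": 0, "right": 1}              # critic's fixed standard

explorer = itertools.cycle(["left", "right"])

def performance_element():
    """Select the external action currently believed best."""
    return max(action_values, key=action_values.get)

def problem_generator():
    """Suggest an action to try, to gain informative experience."""
    return next(explorer)

def critic(action):
    """Feedback on the action against the performance standard."""
    return REWARD[action]

def learning_element(action, feedback, lr=0.5):
    """Improve the knowledge the performance element relies on."""
    action_values[action] += lr * (feedback - action_values[action])

for step in range(10):
    # Explore on even steps, exploit on odd steps.
    action = problem_generator() if step % 2 == 0 else performance_element()
    learning_element(action, critic(action))

print(performance_element())  # right
```

After a few explorations the learned value of "right" overtakes "left", so the performance element starts choosing it on its own: the agent began with no knowledge and adapted through feedback.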