What is Tool Calling?
Tool calling is the process by which an LLM can use AI Tools.
When you ask an LLM a question, it generates an answer. That answer is generated on GPUs hosted by the model provider, like OpenAI or Anthropic. So how is it able to call an AI Tool when that tool is on your laptop or on your MCP Server and the AI is in the cloud?
Well, it does this through a process called Tool Calling. When you ask it a question, in the background you are also giving it instructions describing all the tools it can use. When it wants to use a tool, like the add tool mentioned in the AI Tools post, it asks us to run the tool for it and return the results.
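As a rough sketch, here is what one of those background tool descriptions might look like for the add tool, written as a Python dict. The shape loosely follows OpenAI-style function calling; the exact field names vary by provider, so treat this as illustrative rather than any one API's format:

# A tool definition sent along with the user's question.
# Field names loosely follow OpenAI-style function calling;
# real providers differ in the details.
add_tool = {
    "type": "function",
    "name": "add",
    "description": "Add two numbers and return the sum.",
    "parameters": {
        "type": "object",
        "properties": {
            "x": {"type": "number"},
            "y": {"type": "number"},
        },
        "required": ["x", "y"],
    },
}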
It does this by giving us a "special message" with details on which function it wants to call, and what values to give it.
For example, if we asked it "What is 2+3", it would send us a special message that looks like this:
{
  "type": "function_call",
  "name": "add",
  "arguments": "{\"x\": 2, \"y\": 3}"
}

We would then run the add function with 2 and 3 as values, and give the response - 5 - back to the AI. It would then generate the final response "2+3 is equal to 5" and return that to us.
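To make that concrete, here is a minimal Python sketch of our side of the exchange, using the message from the example above. The shape of the result message is an assumption; each provider defines its own format:

import json

def add(x, y):
    # The actual tool: a plain function running on our machine.
    return x + y

# The "special message" the model sent us, as shown above.
tool_call = {
    "type": "function_call",
    "name": "add",
    "arguments": "{\"x\": 2, \"y\": 3}",
}

# The arguments arrive as a JSON string, so parse them first.
args = json.loads(tool_call["arguments"])

# Run the requested function locally.
result = add(args["x"], args["y"])  # 5

# Hand the result back so the model can write the final answer.
# This result shape is illustrative; providers format it differently.
tool_result = {
    "type": "function_call_output",
    "name": tool_call["name"],
    "output": str(result),
}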
All of this happens in the background. We don't even see it, because our local chat will process it and only show us the "human output" from the AI. We don't need to know about the function calls, so it doesn't show them to us. We can see them if we want to, but they are usually hidden away to keep things simple.
This turn-based back and forth is the process of tool calling.
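Put together, the whole exchange is just a loop: send the conversation, run any tool the model asks for, append the result, and repeat until the model produces a normal reply. The sketch below stubs out the model with scripted replies so it runs on its own; a real client call would go where call_model is:

import json

def add(x, y):
    return x + y

tools = {"add": add}

# Stand-in for the real model API, for illustration only: it first
# asks for the add tool, then produces the final answer.
_scripted_replies = iter([
    {"type": "function_call", "name": "add",
     "arguments": "{\"x\": 2, \"y\": 3}"},
    {"type": "message", "content": "2+3 is equal to 5"},
])

def call_model(messages):
    # A real client would send `messages` and the tool definitions
    # to the provider here.
    return next(_scripted_replies)

messages = [{"role": "user", "content": "What is 2+3?"}]

while True:
    reply = call_model(messages)
    if reply["type"] == "function_call":
        # The model wants a tool run: do it, append the output, loop.
        args = json.loads(reply["arguments"])
        output = tools[reply["name"]](**args)
        messages.append({"type": "function_call_output",
                         "output": str(output)})
    else:
        # A normal reply: the "human output" the chat window shows us.
        print(reply["content"])
        break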
When done over the internet, the tools are called using MCP, the Model Context Protocol.