When rolling your own agent, tool calling comes with annoying plumbing (packaging up arguments from the LLM, unpacking return values, etc.). People like frameworks in part because this plumbing is dealt with for them. But if you could solve this yourself, it'd be fairly straightforward to roll a generic agentic loop that interacts with your arbitrary Python functions "directly", without the tool management headaches.
It turns out, with a few dozen lines of Python introspection and metaprogramming, you can make life quite a bit easier for yourself.
Let me explain.
Recap: agentic tool calling 101
An LLM, by itself, would struggle to answer the prompt:
Which location in the US currently has the highest temperature?
import openai

inputs = [{
    "role": "system",
    "content": "What city has the highest temperature right now?"
}]
resp = openai.responses.create(
    model="gpt-5",
    input=inputs
)
You’ll get a response:
I don’t have live access to real‑time weather data, so I can’t tell you the current hottest city right now.
Instead, the LLM asks YOU the question. (In Soviet LLM, LLM asks YOU question!) It asks you in the form of a tool call.
So, maybe you have, at your beck and call, a function like the following:
from typing import Literal

from pydantic import BaseModel


class Weather(BaseModel):
    temperature_c: float
    precipitation_inches: float
    wind_speed_mph: float


def get_current_weather(location: Literal["New York", "San Francisco", "Los Angeles"]) -> Weather:
    """Get the current weather in a handful of locations"""
    if location == "New York":
        return Weather(temperature_c=30,
                       precipitation_inches=0.0,
                       wind_speed_mph=8.0)
    elif location == "San Francisco":
        return Weather(temperature_c=25,
                       precipitation_inches=3.0,
                       wind_speed_mph=12.0)
    elif location == "Los Angeles":
        return Weather(temperature_c=31,
                       precipitation_inches=0.0,
                       wind_speed_mph=14.0)
    else:
        raise ValueError(f"Unknown location: {location}")
The agent needs to know:
(a) That it has a tool it can call to get the weather
(b) What arguments that tool takes
(c) What it gets back as a return value
So you add a `tools` parameter to your call (as per the spec here):
inputs = [{
    "role": "system",
    "content": "What city has the highest temperature right now?"
}]
# Put this together
tool_spec = {
    'type': 'function',
    'name': 'get_current_weather',
    'description': 'Get the current weather in a handful of locations',
    'parameters': {
        'properties': {
            'location': {'enum': ['New York',
                                  'San Francisco',
                                  'Los Angeles'],
                         'title': 'Location',
                         'type': 'string'}
        },
        'required': ['location'],
        'title': 'WeatherArgs',
        'type': 'object'
    }
}
resp = openai.responses.create(
    model="gpt-5",
    input=inputs,
    tools=[tool_spec]
)
Within the `resp.output` list, you'll then have some instances that look like this:
ResponseFunctionToolCall(arguments='{"location":"Los Angeles"}',
                         call_id='call_y4j7TqZsOP4tmywyZD7e2Oxc',
                         name='get_current_weather',
                         type='function_call',
                         id='fc_0117f13d2526e2140068eff91466188190888f2303dbe071a0',
                         status='completed')
In other words, it's a request that you:
(a) Call the tool (with the params)
(b) Package up the response
(c) Call openai again with the results
And that’s the agentic loop. Call, fulfill tools, continue, until there are no more tool requests to fulfill.
import json

inputs += resp.output
item = resp.output[-1]
# item.arguments is a JSON string, so parse it first
args = json.loads(item.arguments)
weather = get_current_weather(location=args['location'])
json_resp = weather.model_dump_json()
inputs.append({
    "type": "function_call_output",
    "call_id": item.call_id,
    "output": json_resp,
})
# Call the LLM again
resp = openai.responses.create(
    model="gpt-5",
    input=inputs,
    tools=[tool_spec]
)
# Run tools again...
# Repeat...
Convenience function for tool calling
People reach for frameworks - or even an MCP server! - because said framework makes it easy to package up a function and treat it as a tool without any plumbing. But you can just write that code yourself, then have a generic agentic loop that delegates tool calls to any Python function you pass.
And that's what I've done in the code snippet I want to demonstrate here.
Assuming your Python function has decent type annotations, you could just do this:
ArgsModel, tool_spec, call_from_tool = make_tool_adapter(get_current_weather)
The code for `make_tool_adapter` iterates over the function's arguments, grabs their type annotations, and builds a pydantic model that wraps them. At the same time, it gives you a tool spec for them.
Finally, it gives you a function `call_from_tool` that wraps your function `get_current_weather`. This wrapper unpacks arguments from `ArgsModel`, calls your function `get_current_weather`, and responds with the correct return value.
But it also does a few other convenient things:
- The tool's description is the function's docstring
- The tool's name is the function's name
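I won't reproduce the real helper exactly here, but a minimal sketch of the idea, assuming pydantic v2 and plain keyword arguments, might look like the following (the generated model name is my own placeholder; the actual `pydantize.py` may differ in details):
import inspect
import json
from typing import Callable

from pydantic import BaseModel, create_model


def make_tool_adapter(func: Callable):
    """Sketch: build (ArgsModel, tool_spec, call_from_tool) from a typed function."""
    sig = inspect.signature(func)
    # One pydantic field per argument, pulled from the type annotations.
    # Arguments without defaults become required fields (the `...` sentinel).
    fields = {
        name: (param.annotation,
               ... if param.default is inspect.Parameter.empty else param.default)
        for name, param in sig.parameters.items()
    }
    ArgsModel = create_model(f"{func.__name__}_args", **fields)

    # Tool spec in the responses API shape: docstring as description,
    # pydantic's JSON schema as the parameters
    tool_spec = {
        "type": "function",
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": ArgsModel.model_json_schema(),
    }

    def call_from_tool(args: BaseModel):
        # Unpack the validated pydantic model back into keyword arguments
        py_resp = func(**args.model_dump())
        # Serialize the return value so it can go back to the LLM
        if isinstance(py_resp, BaseModel):
            json_resp = py_resp.model_dump_json()
        else:
            json_resp = json.dumps(py_resp)
        return py_resp, json_resp

    return ArgsModel, tool_spec, call_from_tool
Note the wrapper returns both the raw Python return value and its JSON serialization: the JSON goes back to the LLM as the function call output, while the Python object stays around for your own bookkeeping.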
What does this look like in practice?
Assume a helper that builds a dictionary mapping function names → tools (the 3-tuple of args model, tool spec, and tool wrapper function):
def create_tools(funcs):
    tools = {}
    for func in funcs:
        ArgsModel, tool_spec, call_from_tool = make_tool_adapter(func)
        tools[func.__name__] = (ArgsModel, tool_spec, call_from_tool)
    return tools
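So `create_tools([get_current_weather])` gives you `{'get_current_weather': (ArgsModel, tool_spec, call_from_tool)}`, which lets the loop below look each tool up by the name the LLM sends back in its tool call.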
Then when processing any outputs from an LLM, we can simply loop as follows:
def agentic_loop(model, system_prompt, funcs):
    """Agentic loop for any task."""
    tools = create_tools(funcs)
    inputs = [{
        "role": "system",
        "content": system_prompt
    }]
    tool_calls_found = True
    while tool_calls_found:
        # Repeat while tool calls exist:
        resp = openai.responses.create(
            model=model,
            input=inputs,
            tools=[tool[1] for tool in tools.values()]  # The tool specs
        )
        inputs += resp.output
        tool_calls_found = False
        # (Parallelize if you want)
        for item in resp.output:
            if item.type == "function_call":
                tool_calls_found = True
                tool_name = item.name
                tool = tools[tool_name]
                # Translate arguments -> pydantic type
                ToolArgsModel = tool[0]
                # The wrapper function that accepts `ToolArgsModel`
                tool_fn = tool[2]
                fn_args = ToolArgsModel.model_validate_json(item.arguments)
                py_resp, json_resp = tool_fn(fn_args)
                inputs.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": json_resp,
                })
    return resp
print(agentic_loop("gpt-5",
                   "What city has the highest temperature right now?",
                   [get_current_weather]))
Now you have a general-purpose agentic loop that will work for experimenting with any use case. I won't pretend it meets the scalability constraints you'd need in production, but it's been an invaluable tool for experimentation.
If you want to use the `make_tool_adapter` helper, just grab `pydantize.py` from my course code and go to town.
Enjoy softwaredoug in training course form!
