Logfire supports instrumenting calls to OpenAI with one extra line of code. Here's an example of instrumenting the OpenAI SDK:
```python
import openai

import logfire

client = openai.Client()

logfire.configure()
logfire.instrument_openai(client)

response = client.chat.completions.create(
    model='gpt-4',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Please write me a limerick about Python logging.'},
    ],
)
print(response.choices[0].message)
```
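If you'd rather not pass each client explicitly, `instrument_openai` can also be called with no arguments; a minimal sketch, assuming that calling it without a client instruments all OpenAI clients:

```python
import openai

import logfire

logfire.configure()
# Assumption: with no client argument, instrument_openai
# instruments all OpenAI clients rather than a single instance.
logfire.instrument_openai()

client = openai.Client()
response = client.chat.completions.create(
    model='gpt-4',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.choices[0].message)
```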
All methods are covered with both `openai.Client` and `openai.AsyncClient`.
For example, here's instrumentation of an image generation call:
```python
import openai

import logfire


async def main():
    client = openai.AsyncClient()
    logfire.configure()
    logfire.instrument_openai(client)

    response = await client.images.generate(
        prompt='Image of R2D2 running through a desert in the style of cyberpunk.',
        model='dall-e-3',
    )
    url = response.data[0].url
    import webbrowser

    webbrowser.open(url)


if __name__ == '__main__':
    import asyncio

    asyncio.run(main())
```
When instrumenting streaming responses, Logfire creates two spans — one around the initial request and one
around the streamed response.
Here we also use Rich's `Live` and `Markdown` types to render the response in the terminal in real time.
```python
import openai

import logfire
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown

client = openai.AsyncClient()
logfire.configure()
logfire.instrument_openai(client)


async def main():
    console = Console()
    with logfire.span('Asking OpenAI to write some code'):
        response = await client.chat.completions.create(
            model='gpt-4',
            messages=[
                {'role': 'system', 'content': 'Reply in markdown one.'},
                {'role': 'user', 'content': 'Write Python to show a tree of files 🤞.'},
            ],
            stream=True,
        )
        content = ''
        with Live('', refresh_per_second=15, console=console) as live:
            async for chunk in response:
                if chunk.choices[0].delta.content is not None:
                    content += chunk.choices[0].delta.content
                    live.update(Markdown(content))


if __name__ == '__main__':
    import asyncio

    asyncio.run(main())
```
We also support instrumenting the OpenAI "agents" framework:
```python
import logfire
from agents import Agent, Runner

logfire.configure()
logfire.instrument_openai_agents()

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
```
In this example we add a function tool to the agent and also instrument the `httpx` client that the tool uses:
```python
from typing_extensions import TypedDict

import logfire
from httpx import AsyncClient

from agents import RunContextWrapper, Agent, function_tool, Runner

logfire.configure()
logfire.instrument_openai_agents()


class Location(TypedDict):
    lat: float
    long: float


@function_tool
async def fetch_weather(ctx: RunContextWrapper[AsyncClient], location: Location) -> str:
    """Fetch the weather for a given location.

    Args:
        ctx: Run context object.
        location: The location to fetch the weather for.
    """
    r = await ctx.context.get('https://httpbin.org/get', params=location)
    return 'sunny' if r.status_code == 200 else 'rainy'


agent = Agent(name='weather agent', tools=[fetch_weather])


async def main():
    async with AsyncClient() as client:
        logfire.instrument_httpx(client)
        result = await Runner.run(agent, 'Get the weather at lat=51 lng=0.2', context=client)
        print(result.final_output)


if __name__ == '__main__':
    import asyncio

    asyncio.run(main())
```
Spans from within the function call appear nested within the agent spans.
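If you want even more detail inside a tool call, you can open your own span in the tool body and it will nest at the same point in the trace; a minimal sketch reusing the weather tool above (the span name and message template are illustrative, not part of the agents API):

```python
import logfire
from httpx import AsyncClient
from agents import RunContextWrapper, function_tool


@function_tool
async def fetch_weather(ctx: RunContextWrapper[AsyncClient], lat: float, long: float) -> str:
    """Fetch the weather for a given location."""
    # This span appears nested under the agent and tool spans,
    # alongside the instrumented httpx request.
    with logfire.span('fetching weather at lat={lat} long={long}', lat=lat, long=long):
        r = await ctx.context.get('https://httpbin.org/get', params={'lat': lat, 'long': long})
        return 'sunny' if r.status_code == 200 else 'rainy'
```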