AI-Powered Lunch Planning - Bringing Your Canteen Menu into Claude with MCP
Image credit: Lexie Barnhorn on Unsplash
"What's for lunch today, Claude?"
With that simple question, I can now see exactly what's cooking in our company canteen without opening a single browser tab or scrolling through Confluence. My AI assistant handles it all. This isn't just another "AI showcase" project - it's a practical solution to a daily annoyance that actually saves time. In this post, I'll walk through how I built this system using Anthropic's Model Context Protocol (MCP), turning a static image on Confluence into a queryable resource inside Claude.
The Solution Architecture
Here's what we'll build today:
┌────────────────┐    ┌────────────────┐    ┌────────────────┐
│                │    │                │    │                │
│   Confluence   │───>│   Gemini 2.0   │───>│    Database    │
│  (Menu Image)  │    │  (OCR & Parse) │    │  (Menu Data)   │
│                │    │                │    │                │
└────────────────┘    └────────────────┘    └────────────────┘
                                                    │
                                                    ▼
                                            ┌────────────────┐
                                            │                │
                                            │   MCP Server   │
                                            │                │
                                            └────────────────┘
                                                    │
                                                    ▼
                                            ┌────────────────┐
                                            │                │
                                            │     Claude     │
                                            │                │
                                            └────────────────┘
Let's dive into how I built it step by step...
Getting started
What is MCP?
Model Context Protocol (MCP) is a standard developed by Anthropic that makes it easy to integrate external software into AI applications. In the past, AI assistants mostly operated in isolation from the "real world": Claude can write you a query to find the top users, but you have to go back to your MongoDB to actually run it.
Now, you can for example connect your MongoDB database to Claude and ask questions like
[...] "show the schema of the 'users' collection" or "find the most active users in the collection."
and using MCP, Claude can get data from your MongoDB database to answer that question. Fully automated inside Claude, without having to jump manually into MongoDB to run the query Claude suggested.
It's basically like USB-C for AI: a single universal plug for external integrations, instead of having to write custom integrations for each tool or service. So my plan was to write an MCP server for this use case. It's easy to set up, and I can easily share it with my colleagues so they can check the menu in their preferred AI tool.
Making a plan
Luckily the canteen publishes the menu on Confluence. Sadly it's a PNG image, but that wouldn't stop me. My plan was:
- Get the menu image from Confluence
- Run Gemini 2.0 Flash on the image to extract the menu and save it in a database
- Run an MCP server that exposes this menu as a resource
Step 1: Getting the data from Confluence
A new MCP server for Confluence actually just came out, but I decided to go old-school and use the HTTP API.
The API is surprisingly simple.
To get the attachments of a page, you need to do a GET on
${this.baseUrl}/rest/api/content/${this.pageId}/child/attachment
The response in my case looks something like this.
// some properties omitted for brevity
{
  "results": [
    {
      "id": "423432432434",
      "type": "attachment",
      "status": "current",
      "title": "menu.jpg",
      "_links": {
        "download": "/download/attachments/423432432434/menu.jpg?version=3&modificationDate=423424234234&api=v2"
      }
    }
  ]
}
Great, we can download the image.
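To sketch what that download looks like in code (the bearer-token auth and the helper names here are assumptions from my setup, not a definitive client):

```typescript
// Hedged sketch: fetch the attachment list, pick out menu.jpg, and
// download its bytes. Bearer-token auth is an assumption; your
// Confluence instance may use basic auth instead.

interface AttachmentResponse {
  results: Array<{
    id: string;
    title: string;
    _links: { download: string };
  }>;
}

// Build the absolute download URL for a named attachment, if present.
function downloadUrlFor(
  baseUrl: string,
  res: AttachmentResponse,
  title: string
): string | undefined {
  const match = res.results.find((a) => a.title === title);
  return match ? `${baseUrl}${match._links.download}` : undefined;
}

// Fetch the attachment metadata, then download the image itself.
async function fetchMenuImage(
  baseUrl: string,
  pageId: string,
  token: string
): Promise<Buffer> {
  const headers = { Authorization: `Bearer ${token}` };
  const meta = await fetch(
    `${baseUrl}/rest/api/content/${pageId}/child/attachment`,
    { headers }
  );
  const body = (await meta.json()) as AttachmentResponse;
  const url = downloadUrlFor(baseUrl, body, "menu.jpg");
  if (!url) throw new Error("menu.jpg not found on the page");
  const image = await fetch(url, { headers });
  return Buffer.from(await image.arrayBuffer());
}
```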
Step 2: Getting machine-readable data from the menu
After getting the menu as an image, I needed to turn it into machine-readable text to store in the database. Gemini 2.0 Flash is ideally suited for this, as it supports image input and tool calling for structured output. It's dirt-cheap as well, allowing you to convert 6000 PDF pages to markdown for just $1.
The basic prompt was: "This is a weekly lunch menu. Extract meals for each day with their categories. Return the structured data using the provided function."
The easiest way I saw was to use tool calling and provide a function called process_menu that Gemini would call for every day of the week with the meals offered on that specific day.
{
  type: 'function',
  function: {
    name: 'process_menu',
    description: 'Process the weekly menu data',
    parameters: {
      type: 'object',
      properties: {
        weekday: {
          type: 'string',
          description: 'The weekday for these meals',
          enum: ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']
        },
        meals: {
          type: 'array',
          items: {
            type: 'string',
            description: 'Name of the meal'
          },
          description: 'List of meals available for this weekday'
        }
      },
      required: ['weekday', 'meals']
    }
  }
}
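Wiring that schema into an actual request could look roughly like this. I'm assuming Gemini's OpenAI-compatible chat-completions shape here; the author's actual client setup and model name may differ.

```typescript
// Hedged sketch: build a chat-completion request with the prompt, the
// menu image inlined as a data URL, and the process_menu tool attached.
// The model name and request shape are assumptions, not the author's code.

const PROMPT =
  "This is a weekly lunch menu. Extract meals for each day with their " +
  "categories. Return the structured data using the provided function";

function buildMenuRequest(imageBase64: string, tool: object): any {
  return {
    model: "gemini-2.0-flash",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: PROMPT },
          {
            type: "image_url",
            image_url: { url: `data:image/png;base64,${imageBase64}` },
          },
        ],
      },
    ],
    tools: [tool],
    // let the model decide when to call process_menu
    tool_choice: "auto",
  };
}
```

The body can then be POSTed with any OpenAI-compatible client pointed at Gemini's endpoint.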
Using this as an example input.
Dish Type | Monday | Tuesday | Wednesday | Thursday | Friday |
---|---|---|---|---|---|
Main Course | Herb-Roasted Chicken with Root Vegetables | Beef Stroganoff with Buttered Noodles | Mediterranean Fish Stew | Classic Meatloaf with Caramelized Onion Gravy | Lemon Garlic Shrimp Pasta |
Vegetarian Option | Stuffed Bell Peppers with Quinoa | Butternut Squash Risotto | Spinach and Feta Spanakopita | Eggplant Parmesan | Wild Mushroom Wellington |
Light Option | Grilled Chicken Caesar Salad | Tomato Bisque with Grilled Cheese | Niçoise Salad | Vietnamese Rice Paper Rolls | Greek Salad with Pita Bread |
International Special | Mexican Enchiladas | Thai Green Curry | Moroccan Tagine | Indian Butter Chicken | Spanish Seafood Paella |
Dessert | Apple Crumble with Vanilla Custard | Chocolate Brownie with Ice Cream | Lemon Meringue Pie | Fresh Fruit Trifle | Tiramisu |
We get 5 tool calls for the 5 days, the first one being:
{
"index": 0,
"id": "tool_0_process_menu",
"type": "function",
"function": {
"name": "process_menu",
"arguments": "{\"weekday\":\"Monday\",\"meals\":[\"Herb-Roasted Chicken with Root Vegetables\",\"Stuffed Bell Peppers with Quinoa\",\"Grilled Chicken Caesar Salad\",\"Mexican Enchiladas\",\"Apple Crumble with Vanilla Custard\"]}"
}
}
It correctly identified the weekdays and all meals offered on each day. With around 1600 input tokens for a 300 KB image plus prompt, and 150 output tokens, the bill comes to $0.000221.
Looping through the tool calls, we can extract the meals for each weekday and store them in the database.
// `Menu` is the app's own type for one day's offering
for (const call of toolCalls) {
  if (!call.function?.arguments) continue;
  const { weekday, meals } = JSON.parse(call.function.arguments);
  const dailyMenu: Menu = {
    date: weekday,
    meals: meals.map((mealName: string) => ({
      name: mealName,
    })),
  };
  // ...persist dailyMenu here, e.g. with a Prisma upsert
}
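One wrinkle the loop glosses over: Gemini returns weekday names ("Monday"), while the API queries by calendar date. A small helper can resolve a weekday to a concrete date in the current week. This is my own sketch, not the author's code, and the Monday-start week convention is an assumption:

```typescript
// Hedged sketch: map a weekday name to its date within the week that
// contains `reference` (assuming weeks start on Monday).

const WEEKDAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"];

function dateForWeekday(weekday: string, reference: Date): Date {
  const index = WEEKDAYS.indexOf(weekday);
  if (index === -1) throw new Error(`Unknown weekday: ${weekday}`);
  // getDay(): 0 = Sunday ... 6 = Saturday; shift back to Monday.
  const monday = new Date(reference);
  const shift = (monday.getDay() + 6) % 7;
  monday.setDate(monday.getDate() - shift);
  monday.setHours(0, 0, 0, 0);
  const result = new Date(monday);
  result.setDate(monday.getDate() + index);
  return result;
}
```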
To expose them, we simply add a JSON REST API that gets the meals from the database.
const targetDate = startOfDay(new Date(dateParam));
const menu = await prisma.dailyMenu.findUnique({
where: { date: targetDate }
});
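Before that lookup, the raw ?date= string needs validating and normalizing to midnight, since findUnique only matches the exact stored value. A minimal hedged sketch (the helper name is mine; setHours stands in for date-fns' startOfDay):

```typescript
// Hedged sketch: parse the ?date= query parameter, rejecting garbage and
// truncating to local midnight so it matches the stored day granularity.

function parseMenuDate(dateParam: string): Date | null {
  const parsed = new Date(dateParam);
  if (Number.isNaN(parsed.getTime())) return null;
  parsed.setHours(0, 0, 0, 0); // equivalent of date-fns startOfDay
  return parsed;
}
```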
Step 3: Creating an MCP server
The last step of exposing our menu to AI assistants is to write an MCP server.
The FastMCP framework makes this easy in TypeScript. First, you need to give your server a name.
const server = new FastMCP({
name: "My Server",
version: "1.0.0",
});
Then, you can expose tools to the user of the server. You can specify input parameters and the function to generate the response.
In this case, we want to expose a tool named get_lunch_menu that fetches the menu for a specific day. By specifying the date as a parameter (date: z.string().date()), an AI tool like Claude will know how to call it to get the menu for a given day.
server.addTool({
annotations: {
openWorldHint: true, // This tool interacts with external API
readOnlyHint: true, // This tool doesn't modify anything
title: "Lunch Menu",
},
description: "Get the lunch menu from the canteen for a specific date",
execute: async (args) => {
try {
const response = await axios.get(process.env.API_URL!, {
params: { date: args.date }
});
return JSON.stringify(response.data, null, 2);
} catch (error) {
if (axios.isAxiosError(error)) {
throw new Error(`Failed to fetch menu: ${error.message}`);
}
throw error;
}
},
name: "get_lunch_menu",
parameters: z.object({
date: z.string().date(),
}),
});
You could now distribute this MCP tool as an npm package and let users install it locally. Originally, stdio on the client was the main integration pattern, but it's much easier to expose it as a server. Server-sent events (SSE) were added to the protocol for remote MCP servers; Streamable HTTP is the newer way to go, but not all clients fully support it yet, so I decided to go with server-sent events.
server.start({
transportType: "sse",
sse: {
endpoint: "/sse",
port: 8080,
},
});
Deployment notes
Once you deploy this on your server, clients can connect to $server:8080/sse to discover the available tools and call them. I used Coolify to deploy this to my Hetzner machine. Initially it didn't work: Traefik was compressing HTTP responses with gzip, and the AI clients couldn't handle a compressed SSE stream. You need to disable any sort of compression to make it work.
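In my Coolify setup, that meant making sure Traefik's compress middleware never touches this service's router. With label-based Traefik configuration, that looks roughly like this (the router name lunch-mcp is hypothetical and depends on your setup):

```yaml
labels:
  - "traefik.http.routers.lunch-mcp.rule=Host(`yoururl.com`)"
  # Do NOT attach a compress middleware to this router, i.e. avoid
  # label pairs like:
  #   "traefik.http.middlewares.gzip.compress=true"
  #   "traefik.http.routers.lunch-mcp.middlewares=gzip"
```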
Step 4: Using this MCP in your AI client
Now the only thing left to do is to add this MCP server to your AI client config. For Windsurf and Claude you need to go to the settings and add the URL in the JSON file:
{
"mcpServers": {
"food": {
"command": "npx",
"args": [
"mcp-remote",
"https://yoururl.com:8080/sse",
"--transport",
"sse-only"
]
}
}
}
And now you can ask Claude for food.