AI Comparison 1: ChatGPT 5.0 Pro vs. Claude Sonnet 4 on iPhone App Design
- Ashley Tan
Today I asked the two AI tools to help me create an iPhone app that can calculate and track calories, carbs, fat, etc. based on pictures taken of each meal, and provide recommendations based on dietary goals.
When I asked that question, ChatGPT gave me a high-level view of the process:
Snap/choose a photo → detect likely foods → confirm + set portion → auto-calculate calories, carbs, fat, protein → log → daily/weekly insights and goal-based tips.
On-device ML for speed/privacy; optional cloud lookup for nutrition facts.
Goal engine: daily macro targets; flag over/under and suggest swaps.
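The "goal engine" idea above can be sketched in a few lines of Swift. This is a minimal sketch, not from either AI's output: the `MacroTargets` type, the `status` function, and the 10% tolerance band are all assumptions added here for illustration.

```swift
import Foundation

// Hypothetical sketch of the "goal engine": compare what the user has
// eaten so far against daily macro targets and flag over/under.
// The 10% tolerance band is an assumption, not from the post.
struct MacroTargets {
    var kcal: Double
    var carbs_g: Double
    var fat_g: Double
    var protein_g: Double
}

enum GoalStatus: String { case under, onTrack, over }

func status(consumed: Double, target: Double, tolerance: Double = 0.1) -> GoalStatus {
    if consumed < target * (1 - tolerance) { return .under }
    if consumed > target * (1 + tolerance) { return .over }
    return .onTrack
}

// Example: 180 g of carbs against a 150 g target is more than 10% over.
// status(consumed: 180, target: 150) evaluates to .over
```

From a status like `.over` on carbs, the app could then suggest swaps, as ChatGPT describes.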
Architecture
[Camera/Photos] → [Food Detector (Core ML + Vision)]
↓ candidates
[User confirm + portion size]
↓ item, grams
[Nutrition Lookup (USDA/Edamam/Nutritionix)]
↓ macros
[Store (CoreData/SQLite/CloudKit)]
↓
[Trends (Swift Charts) + Recommendations]
Tools
UI: SwiftUI + MVVM, PhotosPicker + Camera.
ML: Start with a food classifier (e.g., Food-101-style) and a basic portion slider. Later, upgrade to on-device segmentation or ARKit depth for portion estimation.
Nutrition data: Map confirmed food → canonical name → lookup via API (e.g., USDA FoodData Central). Cache results locally.
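The classifier step described above can be sketched with Vision and Core ML. This is a hedged sketch, not code from either AI: the caller is assumed to supply a `VNCoreMLModel` built from whatever Food-101-style `.mlmodel` they add to the Xcode project, and `detectFoods` is a name invented here.

```swift
import Vision
import UIKit

// Hypothetical sketch of the food-detection step using Vision + Core ML.
// The caller supplies a VNCoreMLModel built from whatever Food-101-style
// .mlmodel they bundle — no specific model is assumed here.
func detectFoods(in image: UIImage,
                 using model: VNCoreMLModel,
                 completion: @escaping ([(label: String, confidence: Float)]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        let observations = (request.results as? [VNClassificationObservation]) ?? []
        // Return the top few candidates so the user can confirm or correct,
        // matching the "user confirm + portion size" step in the diagram.
        completion(observations.prefix(3).map { ($0.identifier, $0.confidence) })
    }
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```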
Data Model
FoodEntry: id, date, photoThumbPath, items:[FoodItem], totals (kcal, carbs_g, fat_g, protein_g)
FoodItem: name, grams, kcal, carbs_g, fat_g, protein_g, confidence
You can back this with Core Data or a lightweight SQLite (shown below with in-memory for brevity).
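The data model ChatGPT outlines translates naturally into Swift value types. A minimal sketch follows; the field names come from the post, while `Codable` conformance and deriving the totals from the items (rather than storing them) are assumptions made here for illustration.

```swift
import Foundation

// Sketch of the proposed data model as Swift structs. Codable is an
// assumption added so entries can be cached or synced easily.
struct FoodItem: Codable {
    var name: String
    var grams: Double
    var kcal: Double
    var carbs_g: Double
    var fat_g: Double
    var protein_g: Double
    var confidence: Float
}

struct FoodEntry: Codable, Identifiable {
    var id: UUID
    var date: Date
    var photoThumbPath: String
    var items: [FoodItem]

    // Totals computed from the items rather than stored separately,
    // so they can never drift out of sync.
    var totalKcal: Double { items.reduce(0) { $0 + $1.kcal } }
    var totalCarbs: Double { items.reduce(0) { $0 + $1.carbs_g } }
    var totalFat: Double { items.reduce(0) { $0 + $1.fat_g } }
    var totalProtein: Double { items.reduce(0) { $0 + $1.protein_g } }
}
```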
It also gave me a list of sample Swift code, plus recommendations on how to extend the app and suggested next steps.
When I asked the exact same question, Claude broke the ask down into core features:
1. Image Recognition & Analysis
Food identification from photos
Portion size estimation
Nutritional data extraction
2. Nutrition Tracking
Calories, carbs, fats, protein, fiber, etc.
Daily/weekly/monthly summaries
Progress visualization
3. Goal Setting & Recommendations
Dietary goal input (weight loss, muscle gain, maintenance)
Personalized recommendations
Progress tracking against goals
4. User Interface
Camera integration
Food logging interface
Dashboard with insights
Technical Implementation Approach:
For iOS Development:
Language: Swift/SwiftUI
Image Recognition: Core ML with a pre-trained food recognition model
Database: Core Data for local storage
Nutrition API: USDA FoodData Central or Edamam Nutrition API
Camera: AVFoundation framework
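The nutrition-API step in Claude's stack can be sketched against USDA FoodData Central's search endpoint. This is a simplified sketch under stated assumptions: `YOUR_API_KEY` is a placeholder, and the `Codable` types below model only the few response fields this app would need, not the full API response.

```swift
import Foundation

// Sketch of the nutrition-lookup step against USDA FoodData Central.
// YOUR_API_KEY is a placeholder; the response types are trimmed to
// just the fields this app needs.
struct FDCSearchResponse: Codable {
    struct Food: Codable {
        let description: String
        let foodNutrients: [Nutrient]
    }
    struct Nutrient: Codable {
        let nutrientName: String
        let value: Double
    }
    let foods: [Food]
}

func lookupNutrition(for foodName: String) async throws -> FDCSearchResponse.Food? {
    var components = URLComponents(string: "https://api.nal.usda.gov/fdc/v1/foods/search")!
    components.queryItems = [
        URLQueryItem(name: "query", value: foodName),
        URLQueryItem(name: "pageSize", value: "1"),
        URLQueryItem(name: "api_key", value: "YOUR_API_KEY") // placeholder
    ]
    let (data, _) = try await URLSession.shared.data(from: components.url!)
    return try JSONDecoder().decode(FDCSearchResponse.self, from: data).foods.first
}
```

Results like these would typically be cached locally (Core Data, per Claude's stack) so repeated foods don't require a network round trip.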
Key Challenges & Solutions:
Food Recognition Accuracy: Use multiple data sources and allow manual correction
Portion Estimation: Implement reference object detection or manual portion selection
Nutrition Database: Combine multiple APIs for comprehensive food data
Would you like me to create a prototype interface to show how this might look and work? I can build a React-based demo that simulates the core functionality, or would you prefer I focus on a specific aspect like the technical architecture or food recognition strategy?
Also, what's your experience level with iOS development? This will help me tailor the guidance appropriately.
It also asked about my experience level with iOS development.
I answered "Beginner."
It then provided a beginner-friendly learning path with a phase-by-phase approach!
And then it even generated an example UI.

WHO IS THE WINNER?
Without actually testing out the code, the winner for me so far is Claude. I really appreciated:
Gaining additional understanding: it prompted me for my experience level to get insight before providing a solution.
Immediate feedback: it gave me a sneak preview of the end product to ensure that the AI understood my request.
Both of the above gave me confidence that the solution provided would meet my specific needs, and breaking the tasks down into smaller pieces made the whole process much more digestible.
IN SUMMARY
In terms of user-friendliness, Claude.ai is the winner so far. Be sure to check back for the results of the actual AI-generated code comparison!