My AI Usage (Dec 2025)
As a senior trainer, a technology enthusiast, and someone who has recently focused on leadership, I have watched the rise of AI over the last couple of years closely. Well, everybody has, of course.
Let’s break down the generally available interfaces to these AI models:
- Main app
- Embedded in other apps
- API-based
1. Main App
Every major AI model has its own app: ChatGPT, Claude, Gemini, Grok, and Meta AI. Some are designed to be a super app, a one-stop app where every service the model can provide is accessible. With the advent of MCP (the Model Context Protocol), their functions are easily expanded with the context and tools that MCP servers provide.
The apps I’m talking about here are desktop apps, not mobile apps, as desktop apps are generally more flexible and can therefore be easily extended. Mobile apps are limited by the capabilities of the operating system, which usually has security measures in place to limit access to sensitive data.
The main function of these apps is the chat itself, but notably, the best models also provide additional functions to differentiate themselves.
ChatGPT has voice mode, custom GPTs (which let users create their own variant, possibly with a different personality), and, in the desktop app, the ability to interact directly with other applications when given granular access.
Claude boasts a large context window, letting it recall long previous conversations and process data from large documents. This is further extended by its ability to connect to external servers: Connectors access remote MCP servers, while Extensions access local MCP servers.
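Since Extensions are essentially local MCP servers, it’s worth seeing how small such a server can be. Here is a minimal sketch in Python using the official MCP SDK’s FastMCP helper (pip install mcp); the word-count tool is a hypothetical illustration, not something Claude ships.

```python
# Minimal local MCP server sketch, using the official Python SDK's
# FastMCP helper. The tool below is hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes")  # the server name a client like Claude would see

@mcp.tool()
def count_words(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which suits local extensions
```

Once registered in the desktop app’s configuration, a tool like this shows up next to Claude’s built-in capabilities.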
Gemini, Grok, and Meta AI are primarily web-based or mobile app-based, and they leverage their own ecosystems for access to real-time information. While Gemini and Grok can access external MCP servers, this access is typically through the CLI or developer tools, not for regular users. Meta AI currently does not have this capability.
2. Embedded in Other Apps
Embedded usage is when the AI model is accessed through other applications, rather than through an app provided by the model vendor itself.
Popular AI-enabled uses in applications include:
- Camera/photo editing & generation: These functions allow users to remove backgrounds, enhance image quality, change the styling of photos, or generate entirely new images.
- Transcription & translation: This covers capabilities such as speech-to-text conversion, automatic generation of meeting notes, live captions, live translations, and powering voice assistants.
- Writing Assistants: These tools provide grammar and style suggestions, content generation, and are often used to power chatbots for knowledge bases and complaint handling.
- Recommendation, Personalization, and Prediction: These functions suggest features or products, customize the user interface, and predict user behavior.
- Image/sound recognition: This includes capabilities like face and object detection, as well as biometric authentication.
3. API-Based
API-based interaction is usually not something regular, non-technical users do, although with the rise of low/no-code app development and vibe coding, this may change in the future.
Of course, the embedded usage above can also go through APIs, but here I mean accessing the API directly, whether from the CLI or by configuring a model’s API endpoint in an app’s config file. This direct API access is primarily used for application development, including designing, coding, and testing new features and products.
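To make this concrete, here is a minimal sketch of a direct API call in Python. It assumes an OpenAI-compatible chat completions endpoint; the model name is a placeholder, and the key is read from an environment variable.

```python
# Minimal direct API call sketch against an OpenAI-compatible
# chat completions endpoint. The model name is a placeholder.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Explain MCP in one sentence."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The same pattern works for any provider with a compatible endpoint; only the base URL, key, and model name change.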
My AI Usage in December 2025
One of the most important things I’ve learned about prompting is that you can (and should) ask the AI to loop, or break the task into multiple steps, until you’re satisfied with the final result. Make sure the AI asks you before moving on to the next loop or step. This way you can change things in the middle of the process, rather than writing one long, complex prompt and getting a wrong result; see the sketch below.
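Here is what that pattern looks like if you script it rather than type it into a chat window. Everything below is hypothetical: ask_model stands in for any real chat API call, such as the one sketched earlier.

```python
# Step-by-step prompting loop: the model handles one step at a time,
# and a human reviews before the next step runs. All names here are
# hypothetical; ask_model is a stand-in for a real chat API call.
def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real API call, e.g. the requests example above.
    return f"[model output for: {prompt[:50]}...]"

task = "Draft an SOP for onboarding new trainers."
steps = ["outline the sections", "draft each section", "tighten the wording"]

context = task
for step in steps:
    draft = ask_model(f"{context}\n\nNext, {step}. Stop and wait for my review.")
    print(draft)
    feedback = input("Press Enter to approve, or type corrections: ")
    context = f"{context}\n\n{draft}"
    if feedback:
        context += f"\n\nReviewer feedback: {feedback}"
```

The `input()` call is the whole point: each step waits for human review before the next one begins, which is exactly the behavior to ask for in the chat version of this workflow.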
Now, I’ll explain how I use AI in three categories: work, my Master’s degree, and personal life.
Work
I use AI for both Vibe Coding and Writing.
For Vibe Coding, the process involves:
- Designing: I use ChatGPT for designing, specifically its canvas capability for UI/UX. This way I can see the output and fine-tune the design. The design code is in JSX format, so I can export it and continue in another, more detailed design app; most of the time, I continue in an IDE such as VS Code. I’ve found it best to create the general design with AI first, then work on the detailed design myself, rather than letting the AI produce the whole complex design at once.
- Coding & Testing: I also use ChatGPT and its integration with Visual Studio Code and GitHub. I only turn to Claude if ChatGPT is unable to solve a particular problem. Judging and reading how most of the people with my similar background use AI, I agree that the best way to use vibe coding is to create a simple, single-function app or tool. I need to look more into how big tech programmers are using vibe coding, considering the scale and scope of their apps.
For Writing, I use AI to create lengthy office memos in Indonesian and to produce standardized documents like SOPs, contracts, and user guides. The workflow for both is similar:
- I create the document structure in bullet points.
- ChatGPT completes the sentences and paragraphs.
- I then edit the results to match my personal writing style and check the final result for typos and grammatical errors.
This significantly speeds up my writing and sometimes provides the momentum to keep writing even without AI assistance.
As for Scheduling, due to a recent change of scope in my day job, I now have much smaller projects and tasks. I have tried several AI-enabled task, calendar, and project tools but have not yet found success, which suggests these tools might work better with large, complex projects and a greater number of participants. I would love to see implementation examples from larger teams.
For Meeting notes transcription & follow-up, I use Zoom, ChatGPT, and Mem.ai to summarize meetings and create action plans and follow-up items. I can’t stress enough the benefit these tools bring to this kind of activity, and I think every team in every organization should have them enabled in their offline and online meetings.
Master’s Degree
I’m still in my first semester of my Master’s program, and I find myself relying on AI for:
- Writing: I draft all assignments, journals, and reports with ChatGPT’s assistance.
- Data analytics tool assistance: I use ChatGPT and Claude for help with apps and programs like RapidMiner/AI Studio, Google Colab, and Python programming.
- Learning & completing quizzes and tests: I import all class materials into Google NotebookLM for assistance.
- Class transcription: I use MacWhisper/Whisper Transcription to transcribe class sessions and ChatGPT to clean up the results; a scriptable sketch of the same flow follows this list.
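MacWhisper is built on OpenAI’s Whisper model, so the transcription step can also be scripted directly. Here is a minimal sketch using the open-source whisper package (pip install openai-whisper); the file name is a placeholder.

```python
# Transcription sketch using the open-source Whisper model, the same
# model family behind MacWhisper. The file name is a placeholder.
# Note: the whisper package requires ffmpeg on the PATH.
import whisper

model = whisper.load_model("base")  # small and fast; larger models are more accurate
result = model.transcribe("lecture_recording.mp3")
print(result["text"])  # raw transcript, ready to paste into a chat for cleanup
```

The cleanup step is then an ordinary chat prompt: paste the transcript and ask for punctuation, paragraph breaks, and a summary.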
Personal
For my personal life, I use AI primarily for:
- Photo editing & generation: I use the iOS Photos app and Gemini for background and object removal in my photos. I also used Gemini to create a professional profile picture for my side hustle. And if you haven’t tried Gemini’s Nano Banana model for image processing, you should. It’s that good.
- Searching: I now find myself using Google’s AI search a bit more, but my default search engine is still Kagi for privacy reasons. Kagi’s AI summaries are still not as good as Google’s, but there is room for improvement, of course.
More AI Resources
Here are some articles & blog posts about AI that you shouldn’t miss:
- Minimum Viable Curiosity by Michael Lopp
- How I use AI (Oct 2025) by Ben Stolovitz → the inspiration for this post
- Measured AI by Gina Trapani