AI Functions
complete
Doran Pauka
When planning from the chat, for example telling the AI a task is done, the chat says it will be marked as done, but the task is not actually marked as such.
Martin Adams
complete
Released this past Friday, August 4.
Martin Adams
Merged in a post:
Reduce AI Token Usage
Martin Adams
Each added task uses almost 1k tokens, which only lets users add up to 100 tasks/month (on Premium) or 250 tasks/month (on Business). Either reduce the token usage or further refine how token usage is tracked.
Martin Adams
Merged in a post:
I added a few reminders, and then the AI stopped auto-tagging or parsing them
Tashfin Awal
Martin Adams
in progress
Philippe, we’re making a few changes to how we process task entry (using AI functions in the background). Concretely, this means that while you’ll no longer see the input field colored, the app will properly understand your meaning and also automatically adjust your task name.
Martin Adams
Merged in a post:
Ask AI for next task
Doran Pauka
Ask the AI which task to execute next, based on the available time, my location, and the task schedule.
Martin Adams
planned
Thank you for reporting this, Doran. It’s actually a poor user experience, since we haven’t yet connected the AI to the task list! We’ll be doing this in the coming week by enabling AI functions (i.e. things that the AI can do, like add tasks to the list).
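As a rough sketch of what such an AI function could look like (assuming the OpenAI function-calling/tools API; the add_task name and its parameters are hypothetical and only for illustration):

```python
# Sketch: exposing an "add_task" AI function to the model, assuming the
# OpenAI tools API. The function name and parameters are hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "add_task",  # hypothetical function the app would implement
        "description": "Add a task to the user's task list.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string", "description": "Task name"},
                "due_date": {"type": "string", "description": "Optional ISO 8601 due date"},
            },
            "required": ["title"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Remind me to water the plants tomorrow"}],
    tools=tools,
)

# If the model decides to call add_task, the structured arguments arrive here
# and the app can then actually create (or complete) the task -- the step
# that is currently missing when the chat says a task "will be marked as done".
print(response.choices[0].message.tool_calls)
```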
Martin Adams
Use GPT-3.5-turbo for some AI calls.
Martin Adams
4 simple ways to reduce the cost of using GPT-4 by up to 98% (and improve accuracy at the same time).
- Combine prompts: Join two or more queries to make a single API call instead of multiple separate calls.
- Prompt selection: Decrease prompt size by selecting a small optimal subset of examples for the LLM.
- Caching: Store and reuse the LLM’s response when a similar query is asked.
- LLM Cascading: First send the API request to a smaller model, then move on to a larger model (e.g. GPT-4) only if the results are insufficient (a rough sketch follows after the paper link below).
"Our experiments show that FrugalGPT can save up to 98% of the inference cost of the best individual LLM API while matching its performance on the downstream task. On the other hand, FrugalGPT can improve the performance by up to 4% with the same cost."
- authors: @james_y_zou, @matei_zaharia
paper: arxiv.org/abs/2305.05176
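A minimal sketch of the caching and cascading ideas above, assuming the OpenAI chat completions API; the acceptance check and its heuristic are hypothetical stand-ins for the learned scorer the paper uses:

```python
# Sketch of caching + LLM cascading, assuming the OpenAI chat completions API.
from functools import lru_cache
from openai import OpenAI

client = OpenAI()

@lru_cache(maxsize=1024)           # caching: reuse the answer for repeated queries
def ask(prompt: str) -> str:
    # LLM cascading: try the cheaper model first ...
    answer = _complete("gpt-3.5-turbo", prompt)
    if _looks_sufficient(answer):  # hypothetical acceptance check
        return answer
    # ... and fall back to the larger model only when the result is insufficient.
    return _complete("gpt-4", prompt)

def _complete(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def _looks_sufficient(answer: str) -> bool:
    # Placeholder heuristic; FrugalGPT trains a scorer for this decision instead.
    return bool(answer.strip()) and "i'm not sure" not in answer.lower()
```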