Little annoyances with AI

For the past few months I have been diving more seriously into AI tools, and with that comes wonder, joy, fear, and sometimes small things that catch my attention. This is my recap of some of those little annoyances.

The unclarity of tokens and credits

Every day I try at least two new AI tools to figure out what each one actually does. I make an account with a Google email address set up specifically for these test purposes and get started. Each prompt costs some tokens or credits, and it seems to cost extra if the prompt or the requested output is more complex.

While I understand the tools I try out are not philanthropic, and I appreciate that you get some freebies before you buy, the tokens or credits available and left are always unclear to me. What do I get for what?

In most cases it is simply unclear what exactly costs what, as there is no consistent model across tools, and I often have no clue what to expect. Sometimes you see a number of tokens being reduced after you get the prompt's output. Sometimes it is a reduction of the few credits you were given, sometimes it is a percentage (of what?), and so on.

Obviously, this nudges you toward a paid subscription. But in most cases (pretty much every case), I'm not interested in this. The tools are often interchangeable and easily forgotten. Personally, I would prefer a reset of the tokens, credits or percentage if you return to the website, say, a week later. This could come with a friendly reminder; they do have my Google email address after all. Surely every company sees a lot of users going inactive or just trying some stuff. I would say: guide us!

In the end, I would love an explanation of what the tokens, credits or percentages actually mean. What did I do at what cost, what is left exactly, what does a new prompt cost, and so on? If I understand the cost, I will (probably) use them with more consideration and more liking for the tool!

The technology of prompting

The fine art of prompting... to me it is a challenge, a success and an annoyance all at once.

You have to learn to explain something in 'human-like' language with a technical twist, to a tool that at times has the understanding of a 5-year-old but the brainpower of a whole society. I have never before given instructions to a person or a tool by writing prompts. It just feels 'off' to be bluntly direct and result-focused, though saying 'please' does help. (Why? Check this article.)

In this other article, it is mentioned: "Generative models don't know what you don't tell them. They will eagerly make assumptions, and it can be hard to make large changes once those assumptions have been made". I find this striking when I look at how I have often received design briefings in my career (e.g. my business stakeholders writing a 'prompt' for a designer or team). We often miss a lot of information, which we go out and (user) research; we don't just make assumptions, and we certainly need to be flexible enough to make large changes. We also do not slightly change our answers or behavior every time we are asked. In that way, it feels wrong to adapt yourself to a technology instead of having the technology adapt to you.

This is what intrigues me: the 'relationship' you as a person have to build with the AI tool. At times I feel like an 'AI tool parent' who has to treat the 'AI baby' in certain exact ways, with tips and tricks and so on. In this way we humanize tech, while I feel it should always be 'human first, technology second'. When I read a comment in the instruction video of the AI tool UX Pilot, "It is not the problem of the AI, but your problem and your prompt", I seriously think we are on the wrong path. We are not humanizing tech; we are technologizing humanity...

The art of waiting

When you have finally 'perfected' your prompt, the next step is to hit that 'submit' button and wait for the results! But wait... when do you get them? Many tools show a progress indication, but more often than not I get bored and move to another tab in my browser, only remembering the prompt's result a few minutes (or hours) later.

Luckily, I do see good examples of keeping the user informed and entertained. In most cases there is textual or visual feedback that the tool is working, though it usually cannot predict exactly when the output will be ready. 'Picture to AI drawing' is the only tool so far that actually predicts, in seconds and as a percentage, how long it will take. It is not super accurate, but I prefer it over a visual animation or vague UI text that informs me only so much.

The new type of abbreviations

I come across new abbreviations every day. Since I work for a large corporation, abbreviations are not unknown to me and I have grown quite used to them. Over the years, you just know the most likely word behind each letter: 'C' stands for Clinical or Chief, 'D' for Data or Decision, 'O' for Office or Organization. Quite often you can quickly guess what is meant from the context.

In this new AI domain, I also see abbreviations every day. However, they bear no relation to what I expect them to be... Abbreviations like RAG, S-REF and IRQ all have unexpected meanings and ways of abbreviating that are either too technical, too far-fetched or perhaps too smart for me...

For example:

  • RAG – Retrieval-Augmented Generation = a technique used to improve the performance of Large Language Models (LLMs) by providing them with external, up-to-date, and domain-specific information before they generate a response.
  • S-REF – Style REFerence = a feature in Generative AI that allows a user to provide an image of a specific style or aesthetic, which the AI model then uses as a guide to generate a new image.
  • IRQ – IRreplaceable Quotient = a new framework designed to measure and cultivate uniquely human abilities that cannot be easily replicated by AI.
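For the curious, the RAG idea from that first bullet can be sketched in a few lines of Python. This is a toy illustration only: the mini 'knowledge base', the word-overlap scoring, and the function names are all made up for the sketch, and a real RAG system would use a proper vector search and an actual language model. The core idea survives, though: retrieve a relevant external snippet first, then hand it to the model together with the question.

```python
import re

# Toy 'external knowledge base' (made up for this sketch).
KNOWLEDGE_BASE = [
    "RAG stands for Retrieval-Augmented Generation.",
    "S-REF lets you give an image model a style reference image.",
    "Tokens are the units language models use to measure text.",
]

def words(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list[str]) -> str:
    """The 'retrieval' part: pick the document sharing the most words with the question."""
    return max(docs, key=lambda d: len(words(question) & words(d)))

def build_prompt(question: str) -> str:
    """The 'augmented' part: prepend the retrieved context to the question."""
    context = retrieve(question, KNOWLEDGE_BASE)
    return f"Context: {context}\nQuestion: {question}"

print(build_prompt("What does RAG stand for?"))
```

The prompt that comes out carries the up-to-date, domain-specific snippet along with the question, which is exactly the trick the acronym hides.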

Without looking them up, I would never have guessed those, and I probably will not remember them either...


What do you think? Let me know if you have similar or other ‘little annoyances with AI’ in the comments or via a DM (Direct Message ;))
