
What we learned from 500+ prompts our users Asked Wudpecker

November 24, 2023
Last updated
March 21, 2024
Hai Ta

Two weeks ago, we launched Ask Wudpecker, a feature that allows users to ask anything and everything about their meeting.

After 500+ prompts, here are the top 5 things our users asked.

Breakdown of Wudpecker users' top 5 prompts

(1) Searching for meeting details (33.4%)

The most popular type of prompt involves searching for information from a call:

  • What’s their budget?
  • What meetings need to be scheduled?
  • Where is this <information/sentence> mentioned?
  • What is needed from my side?

As we developed Wudpecker, we learnt that 90% of people don’t want to watch recordings just to extract information. Watching a 30–60 minute call to get 1–2 minutes’ worth of information is simply too costly.

So personally, I am very happy to see that our users are using Ask Wudpecker the way we intended.

(2) Follow-up (20.8%)

Some example prompts for this theme:

  • Post call thank-you email and next steps;
  • What should we email Dan;
  • Write a follow up email.

And let’s not forget the grueling task of writing a follow-up email, which can easily take up to an hour if you are a perfectionist.

I’m not naive enough to think that AI can yet draft an email that’s immediately sendable. It takes some training to adapt the tone of the email to your liking.

(Which is not that complicated, by the way. Just write a more elaborate prompt: Draft a follow-up email in a similar style as this <previous email you have written>.)

That said, a drafted email cuts out roughly 10–15 minutes of work, so you can go straight to refining it.

(3) Tl;dr (Too long; didn’t read) (17.5%)

The typical prompts for this theme are:

  • tldr;
  • tldr in x sentences;
  • Overview of the call.

Even though Wudpecker generates a summary by default for every meeting, users would still like an even shorter version of it. This makes sense.

Wudpecker’s default summary tends to be more detailed, to make sure that minimal information is lost. So users who just want the gist of the call prefer it in a couple of sentences.

(4) Action items (8.6%)

Typical prompts for action items are:

  • What are the action items listed in the transcript?
  • What are Mel’s action items?

We noticed that a sizable share of prompts ask for action items. These were asked in older meetings, from before we introduced default action item generation in Wudpecker.

This validates that our users would like to have action items created for every meeting. After all, a meeting without next steps is a meeting wasted.

(5) Translations (7.4%)

This involves translating the conversation from English into a range of languages, from German and Spanish to Japanese.

Many were testing out our AI capabilities by translating conversations. This is a clever way to leverage large language models (LLMs) like Wudpecker’s to do the grunt work of translation, which is often a time-consuming job.

Final thoughts

ChatGPT ushered in a whole new era of ways to multiply our productivity. Everyone is still figuring out which use cases matter, and which are fluff.

Through this small analysis exercise, we now have a better understanding of what our users intuitively care about most. But it is still just the beginning!
