I was curious how well the AI would do at creating sentences that were the opposite of what I wrote. I wanted full oppositeness, as much of it as I could get. If I typed in “The black cat sat.” I’d want something like “A white dog stood.” With my admittedly minimal effort, I got minimal results: mostly the model just swapped the subject. It did get me thinking that this would be an interesting pattern.
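For what it’s worth, a more deliberate prompt would probably do better than my lazy one. Here’s a minimal sketch: the wording is just my guess at what “full oppositeness” instructions might look like, and `call_llm` is a hypothetical stand-in for whatever client you use.

```python
# Sketch of a prompt that pushes for oppositeness along every axis,
# not just a swapped subject. call_llm() is a hypothetical stand-in
# for whatever LLM client you actually use.

def opposite_prompt(sentence: str) -> str:
    return (
        "Rewrite the following sentence so that every element is inverted: "
        "the subject, its modifiers, the verb, and any articles or quantifiers. "
        "Aim for maximum oppositeness, not just a changed subject.\n\n"
        f'Sentence: "{sentence}"\n'
        "Opposite:"
    )

# e.g. call_llm(opposite_prompt("The black cat sat."))
# should steer toward something like "A white dog stood."
```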
There are two places to mess around with this.1 One is a contest to see who can write the prompt that most consistently gets output closest to the goal (whatever it is); the other is whether the humans involved can do a better job than the machine.
It’s a simple concept, but one that could go interesting places across a range of subjects and a range of LLM capabilities. I could see doing it with little code challenges. I could see doing it with short writing/images meant to be in a certain style. People will be refining prompting strategies and evaluating output, and then you’ve got a human challenge to better the machine output. It’d be an interesting mix of prompt understanding, LLM capability testing, and human skill challenges.
With a little effort you could make an app/website that would spin up these little activities and allow voting, etc.
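If I were sketching that app, the core objects might look something like this. Every name here is hypothetical; it’s just one way to carve it up.

```python
# A rough data model for the activity app. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Challenge:
    goal: str                  # the short, specific target
    rubric: list[str]          # the criteria used to score entries

@dataclass
class Entry:
    challenge_id: int
    author: str                # "ai" or a human username
    prompt: str | None         # the prompt used, if AI-generated
    response: str
    votes: int = 0
    rubric_scores: list[int] = field(default_factory=list)
```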
To clarify it a bit better . . .
set a goal
Come up with a short, specific thing you want. It could be text with certain parameters, code that does one particular job, an image in a particular style/subject . . . whatever. It does need to be really specific, and I’d suggest keeping it pretty short.
Generate the rubric that will enable you to evaluate the product based on your criteria. Revise it as needed as you start to use it.
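To make the rubric concrete, here’s one illustrative shape it could take: criteria paired with point scales, plus a scoring helper. The criteria and weights below are made up; use whatever fits your goal.

```python
# One illustrative shape for a rubric: criteria paired with point scales.
# The criteria and weights are invented; adjust to fit your goal.

rubric = [
    ("Matches the requested subject", 3),   # (criterion, max points)
    ("Matches the requested style", 3),
    ("Stays within the length limit", 2),
    ("Overall quality", 2),
]

def score(per_criterion: list[int]) -> int:
    """Sum the per-criterion scores, capping each at its max."""
    return sum(min(s, mx) for s, (_, mx) in zip(per_criterion, rubric))

print(score([3, 2, 2, 1]))   # 8 out of a possible 10
```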
prompt work
Now people can try to generate the prompt that will give the best results. You can set some parameters here depending on your time/interest: maybe a single prompt where you evaluate three responses with your rubric, or an initial prompt plus one corrective follow-up.
Figure out the best prompt by seeing which one’s responses earned the highest cumulative rubric score. I think you’d need to evaluate a couple of responses per prompt to weed out fluke responses.
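Here’s a minimal sketch of that selection step, assuming you’ve scored a few responses per prompt with the rubric. The prompts and numbers are invented for illustration.

```python
# Sketch of picking the winning prompt: score several responses per prompt
# with the rubric, then take the highest cumulative total.

def best_prompt(results: dict[str, list[int]]) -> str:
    """results maps each prompt to the rubric scores of its responses."""
    return max(results, key=lambda p: sum(results[p]))

scores = {
    "prompt A": [7, 9, 8],    # consistently strong
    "prompt B": [10, 4, 6],   # one fluke high score, weaker overall
}
print(best_prompt(scores))    # "prompt A" (24 vs 20)
```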
vs the humans
Once you’ve got the prompt that does the best job, use it to create some number of example items. Have people generate their own best responses. Rate them all via the rubric, without knowing whether each one came from the AI or a human. Or you could do a faster head-to-head comparison.
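A quick sketch of the blind-rating step: shuffle the AI and human entries together so the rater never sees the source until after scoring. The entries here are placeholders.

```python
# Sketch of blind rating: shuffle AI and human entries together so the
# rater can't tell which is which until after scoring.
import random

entries = [
    ("ai", "response generated by the winning prompt..."),
    ("human", "one person's best attempt..."),
    ("human", "another person's attempt..."),
]

ratings = []
for source, text in random.sample(entries, k=len(entries)):
    print(f"\n{text}")
    ratings.append((source, int(input("Rubric score: "))))  # source hidden

for source, rating in ratings:   # reveal sources only at the end
    print(source, rating)
```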
1 I think there’s a third space where you come up with prompts that try to tank the LLM’s abilities. Kind of a best prompt/worst outcome scenario.