What if you could change how you prompt ChatGPT with just a simple set of emotion-based words, and the results became dramatically better? Our research and current findings confirm a new approach to prompting ChatGPT, one built on factors of human psychology that AI researchers did not anticipate. How and why it works is a surprising revelation.
In simple terms, a simple question yields a simple answer. That is the reality of how most of us prompt Large Language Models like ChatGPT. Many people, even AI experts, write prompts in a simplistic or subpar way, as if they were talking to a computer. This is the typical question-and-answer approach, known as Input/Output (I/O) prompting. Prompts written this way tend to follow the most direct route through the model's parameters; in effect, the simplest transformation the network can make. That may be enough for very basic questions, but in practice this method consistently produces subpar results by almost any metric. The conclusion is that simple prompts need to move closer to Super Prompts, where a set of ideas is presented to the AI model to shape and generate the best possible result.
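To make the contrast concrete, here is a minimal sketch of plain I/O prompting, assuming the OpenAI Python SDK (v1 or later); the model name and the question are placeholders, not part of our test set.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Plain I/O prompting: a bare question, no framing, no context.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize the main causes of inflation."}],
)
print(response.choices[0].message.content)
```

This is the pattern most people stop at: one question in, one answer out.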
With the I/O method, even if you ask follow-up questions, you are walking a "neural path" that leads to diminishing performance. This is one of a thousand reasons I invented Super Prompts. What matters about that "neural path" is that we "activate" pathways that exist in the model but do not lie on the central path, and the results are far more powerful than those of simple I/O-style prompts. One aspect of Super Prompts is the set of factors that make them work. There are thousands of parameters to consider, but a precise sentence at the beginning of many of my Super Prompts is what enables the Large Language Model to produce consistently good results. A new university paper has now been published that confirms this through experimental research: the approach uses human psychology to achieve these outstanding results. That is not surprising, because Large Language Models like ChatGPT are built on human language, and humans are emotional beings, which is reflected in our language.
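For illustration only, here is a hedged sketch of the same call with a psychology-based preamble added. The preamble below is an invented example of this kind of emotional framing, not the exact sentence I use in my Super Prompts, and the model name remains a placeholder.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative emotional/psychological framing -- an assumption for this sketch,
# not the actual Super Prompt sentence discussed in the article.
preamble = (
    "You are a world-class economist. This answer matters a great deal to me, "
    "so take a breath and reason through it carefully before you respond."
)
question = "Summarize the main causes of inflation."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": preamble},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

The only change is the framing sentence placed ahead of the question, which is the kind of small, psychology-driven addition described above.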
In our research on testing the best prompts for ChatGPT, we ran more than 13 million tests of how the system responds to prompts, and we made some astonishing, almost unbelievable discoveries. What did we find? We'll explore this in more detail in a dedicated article for members, but I can say that traces of what we found are now supported by academic papers, and that it has left researchers and AI scientists genuinely perplexed. The technique uses natural language as the medium for expressing an optimization problem. That makes the process intuitive and understandable, bridging the gap between human intuition and machine processing, and by stating problems in plain language it makes optimization tasks accessible to a much broader audience.
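As a rough sketch of what expressing an optimization problem in natural language can look like, consider a prompt such as the one below. The task, the candidate instructions, and the scores are invented placeholders for illustration, not figures from our tests or from any paper.

```python
from openai import OpenAI

client = OpenAI()

# A hypothetical optimization task stated entirely in natural language.
# All numbers and candidate instructions below are made up for this example.
task = (
    "We are searching for a one-line instruction that helps a model solve "
    "math word problems more accurately. Previous candidates and their scores:\n"
    "- 'Solve the problem.' -> score 61\n"
    "- 'Work through it step by step.' -> score 72\n"
    "Propose one new instruction that is likely to score higher. "
    "Return only the instruction."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": task}],
)
print(response.choices[0].message.content)
```

The optimization loop here is driven by plain sentences rather than a mathematical objective, which is what makes the approach readable to anyone.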
In the dedicated members' article, we'll tell the story of the new simple sentence added to Super Prompts and demonstrate why and how it works. We'll also introduce you to 100 useful and varied phrases that generate astonishing results. These phrases are part of the toolkit we use in our prompting technique, and they show that this is not a field short on depth or substance. There are many powerful ways to prompt AI for excellent, even unbelievable, results, and this is just one of them.
You can review more of our articles about AI technology.