When working with GPT, the key to getting accurate results lies in crafting your prompts correctly (the very prompts you use in the CyberSEO Pro and RSS Retriever post title and post content assignments). It may seem that the plugin doesn’t work as expected – it doesn’t fetch the full article, loses the original formatting, or simply generates incorrect or empty output. In most cases, the problem is not with the plugin but with your prompt.
Since access to OpenAI’s GPT-4 model is still not available to everyone, and the API for this model is quite expensive to use, you’ll probably have to work with GPT-3.5 models most of the time. While these models may lag behind GPT-4 in raw intelligence, they are still capable of producing high-quality content that rivals GPT-4’s output. The main difference lies not in the results but in the prompt engineering, which we’ll explore in this article.
Let’s start by exploring a couple of critical nuances that will directly affect the content generated by the plugin in automatic mode. These nuances will make a dramatic difference.
HTML markup
If you plan to process articles with HTML formatting (in the CyberSEO Pro and RSS Retriever plugins, article text in HTML format corresponds to the %post_content% shortcode), remember to instruct GPT to return the result with HTML formatting. Only then will you get a rewritten or translated article with the original HTML structure intact, including styles, headings, links, tables, images, etc.
[openai_gpt prompt="Translate the following article into French and return the result in HTML format: %post_content%"]
Note that the “return the result in HTML format” directive is critical if you want to process source articles that contain HTML markup and get the result with the original HTML structure preserved.
Original language of the article
If you want to rewrite text written in a language other than English, but you write your prompt in English, GPT-3.5 will likely return the result in English, even if you explicitly request the original language. In some cases, it may even respond with “I cannot reword this”. However, if the prompt itself is written in the language of the original article, you will get the result you expect:
[openai_gpt prompt="Reescriba el siguiente artículo con un estilo informativo y devuelva el resultado en formato HTML: %post_content%"]
Precision of your GPT prompt
All GPT prompts should be as concrete as possible, leaving no room for ambiguity. For instance, if you want AI-generated content to include HTML elements, specify it explicitly, as described above. Likewise, if you want certain elements such as <h2> headings to be generated, state it directly. Specify whether to use bullet lists and bold or italic text to emphasize important elements. Perhaps you don’t need a “Conclusion” section at the end of the text – be sure to mention that in your prompt. Want the result formatted according to a specific HTML template? That’s also possible:
[openai_gpt model="gpt-3.5-turbo-16k" prompt="Create pros and cons of olive oil clearly separated in a bullet-point HTML format. Ensure the content is engaging, human-like, and includes natural keyword usage for SEO optimization. Use the following HTML markup template (<h2> tag for a heading): <div class='wp-block-group'><div class='wp-block-columns has-background' style='background-color: #eaebed; border-radius: 10px; padding: 16px;'><div class='wp-block-column'><h2>PROs</h2><ul><li>Pros</li></ul></div><div class='wp-block-column'><h2 class='cons-uline'>CONs</h2><ul><li>Cons</li></ul></div></div></div>" max_tokens="1500" temperature="0.5"]
Raw text processing
If you don’t plan to process HTML content and want plain raw text, use the %post_content_notags% shortcode instead of %post_content%. This shortcode strips the article of all HTML elements and passes it to the GPT model as plain text. What does this give you? Faster processing, a significant gain in the maximum length of content you can process (HTML code is much heavier than plain text), and savings on the fees OpenAI charges for its models – the more compact the processed article, the less you pay!
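For example, a plain-text rewrite prompt might look like this (the prompt wording is just an illustration; the shortcode syntax follows the examples above):

[openai_gpt prompt="Rewrite the following article in an informative style: %post_content_notags%"]

Since no HTML is involved here, the “return the result in HTML format” directive is unnecessary.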
Choosing the right GPT model
Speaking of GPT-3.5, there’s another important detail to note. OpenAI offers two similar but distinct models: OpenAI GPT-3.5 Turbo and OpenAI GPT-3.5 Turbo Instruct. The cost of using the API for both models is about 10 times lower than that of the GPT-4 API, making them extremely attractive for autoblogging. Especially tempting is the GPT-3.5 Turbo 16K model, which can process texts of up to 16,384 tokens – quite substantial.
However, you should consider each model’s primary purpose. GPT-3.5 Turbo is a chat model and works on the same principle as ChatGPT, where it is used alongside the GPT-4 model, which is likewise designed exclusively for user chat. This makes both models less suitable for processing text strictly according to your instructions: they may try to converse with you instead of simply following them, which often leads to unexpected results, such as “authorial” comments and remarks added to the generated text. To avoid such surprises, it is recommended to use the newer OpenAI GPT-3.5 Turbo Instruct model, a direct successor of the legendary Davinci. Its only significant drawback is the 4,096-token limit on processed content. Unfortunately, our world isn’t perfect…
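To use it, simply specify the model in the shortcode’s model attribute (a minimal sketch based on the attributes shown earlier in this article; the prompt wording is illustrative):

[openai_gpt model="gpt-3.5-turbo-instruct" prompt="Rewrite the following article in an informative style and return the result in HTML format: %post_content%" max_tokens="2000" temperature="0.7"]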
Simultaneous use of multiple GPT models
Note that the [openai_gpt] shortcode lets you select the exact GPT model you need for each particular task, and also lets you set various parameters such as the maximum number of tokens and the model’s temperature (creativity). So you can mix different GPT models with different parameters in the same HTML template. This feature sets the CyberSEO Pro and RSS Retriever plugins apart from other feed syndicators and no-source content generators for WordPress.
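For instance, a post template could combine two calls with different models and settings (a hypothetical combination for illustration; every attribute used here appears in the examples above):

[openai_gpt model="gpt-3.5-turbo-instruct" prompt="Rewrite the following article in an informative style and return the result in HTML format: %post_content%" temperature="0.7"]
[openai_gpt model="gpt-3.5-turbo-16k" prompt="Write a short, engaging summary of the following article and return it in HTML format: %post_content_notags%" max_tokens="300" temperature="1.0"]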
To recap, this article has covered the key points of working with GPT-3.5 models to generate high-quality content, focusing on the importance of prompt engineering and providing actionable advice for achieving accurate results.
The first point highlighted is the need to instruct GPT to return results with HTML formatting when processing articles with HTML markup. This ensures that the rewritten or translated article maintains the original HTML structure, including styles, headings, links, tables, and images.
The article also emphasizes the impact of the source article’s language on the generated content. If the prompt is written in English for a text written in another language, GPT-3.5 will likely return the result in English. Writing the prompt in the language of the original article ensures the desired outcome.
The precision of the GPT prompt is crucial. The prompt should be specific, leaving no room for ambiguity. It is recommended to explicitly specify elements like HTML tags, bullet lists, and formatting preferences.
For processing plain raw text instead of HTML content, the article suggests using the %post_content_notags% shortcode. This allows for faster processing speed, a higher maximum length of processed content, and cost savings in terms of fees charged by OpenAI for using their models.
Choosing the right GPT model is another important consideration. While GPT-3.5 Turbo and GPT-3.5 Turbo Instruct are similar and cost-effective options, GPT-3.5 Turbo Instruct is better suited for text processing, as it avoids conversational behavior and unexpected results. However, it is limited to 4,096 tokens of processed content.
The article also highlights the flexibility of using multiple GPT models simultaneously with different parameters, allowing for customization and optimization based on specific requirements.
Looking ahead, access to newer models such as GPT-4 can be expected to become more widely available and more affordable, bringing further gains in model capability and prompt handling. It is important for users to stay updated on the latest developments in GPT models and adapt their strategies accordingly.
Based on these insights, actionable advice for users would be to:
1. Pay attention to prompt engineering: Be specific and explicit in prompts, including instructions for HTML formatting, language requirements, and desired elements.
2. Experiment with different GPT models: Explore the suitability of GPT-3.5 Turbo Instruct for text processing tasks and consider the limitations of token count.
3. Stay informed about advancements: Keep up-to-date with the latest developments in GPT models to leverage new features and improvements.
4. Optimize cost and processing speed: Choose between HTML content and plain raw text based on specific needs, considering factors like processing speed and fees.
5. Utilize multiple GPT models: Take advantage of the flexibility provided by the [openai_gpt] shortcode to mix different GPT models and parameters for enhanced customization.
In conclusion, working with GPT-3.5 models requires careful prompt engineering and consideration of factors like HTML formatting, language requirements, precision of prompts, and the right choice of GPT model. Users should stay informed about advancements in GPT models and adapt their strategies accordingly to achieve optimal results.