Do you need help creating prompts that solve your problems? Would you like to expand your knowledge of prompt creation? Or are you perhaps not satisfied with the quality of the output you get from ChatGPT?
Well, I’ve got you covered.
This webinar is for beginners who want to start benefiting from the power of well-structured, detailed prompts to get the best output possible from AI.
What can you expect?
Anatomy of a prompt
What makes it good?
What are the crucial elements?
What are the most common mistakes to avoid?
How to adopt a mindset that helps you craft better prompts?
Tools & approach
What tools and techniques should you consider to improve your prompting skills?
Personalization: the most important factor in prompt design.
Q&A
A session where I will answer your questions and respond to the specific struggles that you experience while working with ChatGPT.
Want to hop on?
You can save your spot here.
This event is completely free of charge.
🗝️ Quick Bytes:
OpenAI introduced GPTs
OpenAI has introduced GPTs, customizable versions of ChatGPT that users can tailor for specific tasks such as learning board game rules, teaching math, or designing stickers. GPTs can be built without coding, for personal use, internal company use, or public sharing, with example GPTs available to ChatGPT Plus and Enterprise users.
The upcoming GPT Store will feature user-created GPTs, with a focus on categories like productivity and education, and includes a revenue model for popular GPTs. GPTs are designed with privacy and safety, ensuring user data control and compliance with usage policies to prevent harmful content.
Enterprises can now deploy internal-only GPTs, enhancing customization for specific business needs. Additionally, GPTs are poised for future real-world applications, and developers can connect them with external APIs for real-world interactions. ChatGPT Plus has been updated for enhanced usability, consolidating access to multiple features and data analysis tools.
Humane officially launches the AI Pin, its OpenAI-powered wearable
Humane Inc. has unveiled the AI Pin, a wearable device comprising a main unit and a battery pack, priced at $699 with a $24 monthly subscription through T-Mobile. Set to ship in early 2024, the AI Pin runs on a Snapdragon processor and offers voice control, gestures, a projector, and a 13-megapixel camera capable of capturing video.
The AI Pin does not record or listen continuously; it is activated manually through a touchpad. Its primary function is to connect to AI models, including ChatGPT, through its Cosmos operating system, which simplifies user interaction by eliminating traditional interfaces like homescreens and complex settings.
The device offers features like voice messaging, email summarization, and real-time translation, with plans for navigation and shopping capabilities. Humane views the AI Pin as part of a larger project that will evolve with improvements in AI technology and hopes to revolutionize user experiences similar to the development of smartphones.
Nvidia Is Piloting a Generative AI for Its Engineers
Nvidia's CTO, Bill Dally, announced ChipNeMo, an AI-driven tool designed to enhance the productivity of chip designers. Although not yet fully tested for its effectiveness, ChipNeMo uses a large language model (LLM) with 43 billion parameters, trained on one trillion tokens of data, including Nvidia's 30-year archive of design documents and code.
The AI tool underwent further training on 24 billion specialized tokens and supervised fine-tuning with sample conversations and designs. It operates in three main capacities: as a chatbot for engineers, an electronic design automation (EDA) tool script writer, and a bug report summarizer, each designed to streamline the chip design process.
ChipNeMo's development leverages Nvidia's extensive historical data, giving it an edge over EDA tools from other companies. It shows potential for significant productivity gains, particularly in summarizing extensive bug reports into concise formats for engineers and managers, a function expected to yield early productivity benefits.
🎛️ ChatGPT Command Line
5 simple tricks to simplify your ChatGPT prompts.
In just a minute.
One at a Time
Ask one clear question per prompt — it's like giving ChatGPT a focused espresso shot for its brain.
Speak Human
Drop the jargon. Use plain language. ChatGPT understands simple talk best, like you're chatting with a friend.
Be Specific
Want a summary? Just say so. Make your prompts clear and specific, like a well-placed order.
Fresh Starts
ChatGPT doesn’t hold memories. Each prompt should come with all the context needed, like you're introducing yourself again.
Listen Up
ChatGPT's replies can lead the way. It's interactive, so pay attention to its cues for better prompting.
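The "Be Specific" and "Fresh Starts" tips can be combined into a simple habit: build every prompt from context, task, and output format. The helper below is a toy illustration of that habit (the function name and field labels are my own, not an official template or API):

```python
def build_prompt(context: str, task: str, output_format: str) -> str:
    """Assemble a self-contained prompt: background context first,
    then one clear task, then an explicit output format."""
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

# Each prompt carries its own context, so no prior chat history is assumed.
prompt = build_prompt(
    context="I run a small online bookstore.",
    task="Summarize the three main benefits of email newsletters for my shop.",
    output_format="A numbered list, one sentence per item.",
)
print(prompt)
```

Pasting a prompt structured this way starts every conversation with everything ChatGPT needs, no matter what came before.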
💡Explained
Implicit Reasoning in LLMs
Currently, reasoning in LLMs is often achieved through a chain of thought: the model generates words that spell out its thought process. The authors of the recent paper "Implicit Chain of Thought Reasoning via Knowledge Distillation" call this explicit reasoning. In simple terms, explicit reasoning means expressing thoughts clearly in natural language (read more here). Implicit reasoning, by contrast, means that something is understood without being directly expressed. The authors use the term to shift the focus to the hidden states across the model's layers rather than to the generated tokens, because those states contain the rich, condensed information the model uses to reason and arrive at a conclusion.
🧠 The Concept of Implicit Reasoning
Unlike explicit reasoning, which generates readable, step-by-step solutions, implicit reasoning processes information within the model's hidden layers, where it is neither visible nor directly interpretable by humans. It builds on knowledge distillation, in which a complex, well-trained 'teacher' model imparts its understanding to a simpler 'student' model. The student learns to mimic the teacher's output without reproducing the exact reasoning steps.
⚙️ Training the Model for Implicit Reasoning
The authors proposed the following three-step strategy:
Mind-Reading the Teacher: Train a student model to “read” the teacher’s “thought process”— the continuous hidden states during intermediate reasoning step generation. The student model, rather than replicating these steps, uses some of the teacher’s hidden states to produce the answer.
Thought Emulation: Knowledge distillation is then used to train an emulator that predicts the teacher’s hidden states from the input “vertically”, across layers, eliminating the need for “horizontal” explicit reasoning steps.
Optimization: Combine the emulator, which predicts the teacher’s thought process, with the mind-reading student, which produces the final answer from the emulated thought process. This combined system is then optimized end to end, allowing the student model to develop its own reasoning methods that may differ from the teacher’s approach.
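The three steps above can be caricatured with linear maps in place of transformers. In this toy sketch, everything is an illustrative assumption (the real method works on LLM hidden states and optimizes the combined system end to end, whereas here the pieces are fit separately with least squares and simply composed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "teacher": maps an input x to hidden "thought" states z, then z to answer y.
d_in, d_hidden = 8, 16
W_teacher_think = rng.normal(size=(d_hidden, d_in))  # x -> hidden thoughts z
w_teacher_answer = rng.normal(size=d_hidden)         # z -> scalar answer y

X = rng.normal(size=(500, d_in))
Z = X @ W_teacher_think.T        # teacher's hidden states (its "thought process")
y = Z @ w_teacher_answer         # teacher's answers

# Step 1 - Mind-Reading the Teacher: a student produces the answer
# directly from the teacher's hidden states.
w_student, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Step 2 - Thought Emulation: an emulator predicts the teacher's hidden
# states straight from the input, skipping explicit reasoning steps.
W_emulator, *_ = np.linalg.lstsq(X, Z, rcond=None)

# Step 3 - Combine: emulator + student answer new inputs without ever
# producing readable intermediate steps.
X_test = rng.normal(size=(10, d_in))
y_pred = (X_test @ W_emulator) @ w_student
y_true = (X_test @ W_teacher_think.T) @ w_teacher_answer
print(np.allclose(y_pred, y_true, atol=1e-6))
```

The point of the sketch is only the data flow: the "reasoning" lives entirely in the intermediate vectors, never in generated tokens.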
It’s also worth adding that the researchers observed that hidden states in higher layers tend to have larger norms, so they applied normalization to each hidden vector, which significantly improved the model's reasoning capabilities.
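One plausible form of that per-vector normalization is a simple L2 rescaling, sketched below (the exact scheme in the paper may differ; the function and example values are mine):

```python
import numpy as np

def normalize_hidden(h: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Rescale each hidden vector to unit L2 norm so that states from
    higher layers, which tend to have larger norms, land on a comparable scale."""
    return h / (np.linalg.norm(h, axis=-1, keepdims=True) + eps)

# Three mock "layers" whose hidden vectors grow in magnitude.
layers = np.stack([np.ones(4) * scale for scale in (1.0, 10.0, 100.0)])
normed = normalize_hidden(layers)
print(np.linalg.norm(normed, axis=-1))  # every row now has norm ~1
```

After normalization, the emulator no longer has to account for layer-dependent magnitudes when predicting the teacher's states.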
🔍 Results
The study showed that a GPT-2 Medium model using Implicit CoT achieved 96% accuracy on 5-digit by 5-digit multiplication tasks – a significant leap from the 2% accuracy of the no-CoT setting. Different sizes of GPT-2 (Small, Medium, Large) were assessed, with the Implicit CoT approach showing consistent improvements in accuracy and efficiency across all of them.
This approach can significantly increase the speed and efficiency of problem-solving in AI models (like multi-digit multiplication), as it eliminates the time-consuming process of generating explicit reasoning steps.
While promising in terms of efficiency, implicit reasoning may reduce the interpretability of the model's decision-making process, posing challenges in situations where understanding the model's reasoning is crucial.
However...
The study focuses primarily on multi-digit multiplication and grade school math problems. It's important to question how well this methodology generalizes to other types of tasks, especially those involving more abstract or nuanced reasoning. Moreover, the research primarily uses GPT-2 models, which are now somewhat dated. Testing with a broader range of models, including newer open-source models like Llama 2 or Mistral 7B, would strengthen the validity of the findings.
Learning more
If you are interested in a hand-picked, brief list of recently presented papers, check out the Warsaw.AI Newsletter.
🗞️ Longreads
- Are language models good at reasoning? (read)
🎫 Events
Hey there, fellow no-code and low-code enthusiasts! It's Aleksander here, and I'm thrilled to inform you about the upcoming No Code Days 2023 conference this autumn. If you're passionate about creating applications without diving deep into coding, this event is for you! 🤖 Here's what's in store:
💡 Two days jam-packed with insightful lectures.
💡 18 hours of workshops that will inspire and motivate.
💡 3 exciting thematic tracks: No Code & AI / Low Code / Change Path.
💡 Hands-on with the latest No Code / Low Code tools.
📅 Mark your calendars for November 20-21, 2023. 📍 We'll be gathering at Expo XXI, Warsaw.
And here's a little treat for you, my lovely readers: use the promo code KEYGEN to grab your tickets at a 15% discount!
I can't wait to see you there! Let's catch up, exchange high fives, and dive deep into some tech talks. Let me know if you're coming, and we can plan to meet up.