Zero-shot prompting leverages large language models (LLMs) to perform tasks without any task-specific training on labeled data. By providing instructions as natural language prompts, LLMs adapt to new tasks by comprehending and executing the instructions. This approach enables LLMs to handle a wide range of NLP tasks, including question answering, text generation, and text classification. Despite its flexibility and ease of use, zero-shot prompting can underperform on tasks that differ substantially from the model's pretraining data, and it may reproduce biases present in that data. Nonetheless, ongoing research strives to enhance its capabilities and address these challenges.
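
To make the idea concrete, here is a minimal sketch of zero-shot sentiment classification, assuming the OpenAI Python SDK (v1+) with an `OPENAI_API_KEY` set in the environment; the model name, label set, and helper function are illustrative assumptions, not part of the original text.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment_zero_shot(text: str) -> str:
    # The natural-language instruction alone defines the task;
    # no labeled examples (demonstrations) are included in the prompt.
    prompt = (
        "Classify the sentiment of the following text as Positive, "
        "Negative, or Neutral. Respond with a single word.\n\n"
        f"Text: {text}\nSentiment:"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce randomness for a classification task
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    # A capable model would typically answer "Neutral" here.
    print(classify_sentiment_zero_shot("I think the vacation was okay."))
```

Note that the prompt contains only an instruction and the input, which is what distinguishes zero-shot prompting from few-shot prompting, where the prompt would also include worked examples of the task.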