Generative AI

Gen AI on RTX PCs Developer Contest

Enter to win a GeForce RTX 4090 GPU, a GTC conference in-person pass, and more.

Build your next innovative generative AI project using NVIDIA® TensorRT™ or TensorRT-LLM on a Windows PC with NVIDIA RTX™ systems for a chance to win an RTX 4090 GPU, a GTC conference pass, and more.

The contest ran from January 8, 2024 to February 23, 2024. See the winning projects and honorable mentions on the contest winners page.

See the contest Terms & Conditions.

Explore the Possibilities

Create your generative AI project in one or more of these categories.

Text-Based Applications

Build generative AI projects that output text, with a primary model accelerated by TensorRT-LLM on Windows PCs for RTX. Whether it’s a RAG-based chatbot, a plug-in for an existing application, or a code-generation tool, the possibilities are endless.

Developers can use a pre-optimized model for RTX (Llama 2, Mistral 7B, Phi-2, Code Llama), any community model optimized for TensorRT-LLM, or their own model optimized for TensorRT-LLM.

Visual Applications

Build projects based on visual media for image, video, or 3D graphics generation, with the primary model accelerated using TensorRT on Windows PCs for RTX.

Developers can choose models such as Stable Diffusion, Stable Video Diffusion, and Stable Diffusion XL. The output for projects in this category must be primarily visual media (images or video).

General Generative AI Projects

This open category lets you choose to build almost any generative AI application or project. Example projects include building multi-modal applications, optimizing community models, developing connectors to community projects, or accelerating existing generative AI pipelines. As with the other categories, the primary model must be accelerated with TensorRT or TensorRT-LLM.

Contest Process

Step 1: Start Now

Get started using our generative AI developer guide for Windows PC.

Explore ChatWithRTX and the Continue.dev VS Code Assistant, reference applications that run locally, on GitHub.

Connect with the community of RTX developers and NVIDIA technical experts on the NVIDIA Developer Discord channel and NVIDIA Developer Forums.

Step 2: Set up and Build Your Project

Set up your development environment and build your project. Use TensorRT or TensorRT-LLM to accelerate your primary generative AI model on a Windows PC with RTX systems.

Step 3: Share on Social

Post a 45- to 90-second demo video of your AI on RTX project on Twitter, LinkedIn, or Instagram using the hashtags #GenAIonRTX, #DevContest, and #GTC24. Also, tag one of these NVIDIA social handles:

Twitter (X): @NVIDIAAIDev
LinkedIn: @NVIDIAAI
Instagram: @NVIDIAAI

Step 4: Submit Your Entry Form

Once your project is complete, submit all your assets, including links to the source code, demo video, social post, and any other supplementary materials.

Prizes

Three Winners Each Receive

  • NVIDIA GeForce RTX™ 4090 GPU
  • GTC 2024 4-day in-person conference pass ($2,095 value, not redeemable for cash)
  • $500 to cover partial travel expenses to NVIDIA GTC 2024 (collected onsite at the event)
  • NVIDIA Deep Learning Institute GenAI/LLM course
  • Your entry being highlighted and promoted by NVIDIA

See the contest Terms & Conditions.

Winner Selection Criteria

Qualifying submissions will be judged on:

  • Demo: Overall impression of the demo for its target audience.
  • Impact and Ease-of-Use: Relative impact of the project, and how easily the project is usable by its target audience. 
  • Technology: How effectively the developer has integrated or used NVIDIA’s technology stack for their application or project.

Get Started With Developing for Windows PC on RTX Systems

Explore the Getting Started Guide Now

Get immediate access to developer resources to streamline your generative AI development journey on Windows PC with NVIDIA RTX systems.

AI Chatbot With Retrieval-Augmented Generation Running Locally

Learn about this developer reference project for creating Retrieval Augmented Generation (RAG) chatbots on Windows using TensorRT-LLM.
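The core RAG loop is: retrieve the passages most relevant to the user's question, then prepend them to the prompt before it reaches the LLM. A minimal sketch of that pattern is below; the bag-of-words scoring and helper names are illustrative stand-ins, not part of the NVIDIA reference project, which uses TensorRT-LLM and a real embedding model.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words counts (a stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the user question with retrieved context before sending it to the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "TensorRT-LLM accelerates large language models on NVIDIA GPUs.",
    "The contest ran from January 8 to February 23, 2024.",
]
prompt = build_prompt("What does TensorRT-LLM do?", docs)
```

In a full application, the resulting prompt would be passed to a TensorRT-LLM-accelerated model for generation, and the toy retriever would be replaced by a vector index over your documents.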

Get Started With TensorRT

This library gives you an easy-to-use Python API to define AI models and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs.
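As a concrete sketch of that workflow, the function below parses an ONNX model and builds a serialized TensorRT engine. The `onnx_path` argument and FP16 flag are illustrative; building an engine requires the TensorRT SDK and an NVIDIA GPU, so the import is deferred until the function is called.

```python
def build_engine(onnx_path: str, fp16: bool = True) -> bytes:
    """Parse an ONNX model and return a serialized TensorRT engine.

    Requires the TensorRT SDK and an NVIDIA GPU/driver, so the import
    is kept inside the function rather than at module level.
    """
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch networks are the standard path for ONNX models.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed: " + str(parser.get_error(0)))

    config = builder.create_builder_config()
    if fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # half precision on RTX Tensor Cores
    return builder.build_serialized_network(network, config)
```

The returned bytes can be written to disk and later deserialized with a `trt.Runtime` for inference, so the expensive optimization step runs once rather than on every launch.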