# How We Evaluate Projects

Understanding the evaluation criteria will help you build a winning project.
## Evaluation Criteria

| Criterion | Description | Weight |
|---|---|---|
| 💡 Innovation | Creativity and uniqueness of your solution | 25% |
| 🤖 On-Device AI Integration | Effective use of the RunAnywhere SDK and local models | 30% |
| ✨ User Experience | Design, usability, and polish | 20% |
| ⚙️ Technical Implementation | Code quality, architecture, and best practices | 15% |
| 🎯 Impact | Potential real-world value and usefulness | 10% |
| **Total** | | **100%** |
## Judging Process

1. **Initial Review:** All submissions are reviewed for completeness and adherence to guidelines.
2. **Technical Evaluation:** Judges assess code quality, AI integration, and technical implementation.
3. **Demo Review:** Demo videos are evaluated for functionality, UX, and innovation.
4. **Final Scoring:** Projects are scored against the evaluation criteria and ranked.
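To see how the weights interact, here is a minimal sketch that combines per-criterion scores into a weighted total using the percentages above. The exact judging formula is not published; the 0-10 scale and the weighted-sum approach are our own illustrative assumptions.

```python
# Hypothetical scoring sketch: the weights come from the criteria table,
# but the 0-10 scale and weighted-sum formula are illustrative assumptions.
WEIGHTS = {
    "Innovation": 0.25,
    "On-Device AI Integration": 0.30,
    "User Experience": 0.20,
    "Technical Implementation": 0.15,
    "Impact": 0.10,
}

def total_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (assumed 0-10) into a weighted total."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Example: strong AI integration and UX, weaker elsewhere.
example = {
    "Innovation": 7,
    "On-Device AI Integration": 9,
    "User Experience": 8,
    "Technical Implementation": 6,
    "Impact": 7,
}
print(total_score(example))  # 7.65
```

Note how the 30% weight on On-Device AI Integration means a strong score there moves the total more than any other single criterion.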
## What Makes a Winning Project?

### ✓ Do This

- Solve a real problem with a clear use case
- Effectively leverage on-device AI capabilities
- Create a polished, intuitive user experience
- Write clean, well-documented code
- Demonstrate privacy and offline benefits
- Show creativity and innovation in your approach

### ✗ Avoid This

- Building without a clear problem to solve
- Using cloud APIs instead of on-device AI
- Poor UI/UX that confuses users
- Messy, undocumented code
- Incomplete or non-functional features
- A missing or poor-quality demo video
> **Focus on Quality Over Quantity**
>
> It's better to have one well-implemented, polished feature that showcases on-device AI than multiple half-finished features. The judges care about execution and impact.