Rules and Guardrails

This challenge touches health and safety. Follow these rules so your prototype is responsible, realistic, and fair to patients and clinicians.

1. Working Assumptions

To keep scope focused on product and AI experience, you may assume:

  • Your target user is a stable adult recovering from a heart attack or stent procedure, cleared for outpatient exercise by their cardiologist.
  • The patient has a smartphone, basic internet, and can use simple apps but may not be tech-savvy.
  • A care team (cardiologist, nurse, exercise specialist) exists and can receive alerts or summaries. You do not need to build the full clinical operations product.
  • Clinical protocols (exercise prescriptions, medication schedules) come from the care team. Your solution supports and reinforces those protocols; it does not create them.

2. Privacy, HIPAA, and Data Governance

Regulatory compliance, including HIPAA, matters in real deployments. For this hackathon, demonstrate awareness of privacy and data governance (what you collect, why you collect it, how long you retain it, and who can see it) rather than implementing full enterprise compliance.
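One lightweight way to show that awareness in a demo is a per-field data inventory answering those four questions. The sketch below is illustrative only: the field names, purposes, and retention periods are assumptions for a prototype, not a compliance checklist.

```python
# Illustrative data inventory for a demo privacy screen.
# Field names, purposes, and retention periods are assumptions,
# NOT a compliance checklist or legal guidance.
DATA_INVENTORY = [
    # (field,              why collected,                 retention,  who can see it)
    ("resting_heart_rate", "track recovery trend",        "90 days",  ["patient", "care team"]),
    ("session_duration",   "adherence to prescription",   "90 days",  ["patient", "care team"]),
    ("symptom_reports",    "safety screening",            "1 year",   ["patient", "care team"]),
    ("chat_transcripts",   "improve AI coaching quality", "30 days",  ["patient"]),
]

def governance_summary(inventory):
    """Render the inventory as human-readable lines for a privacy screen."""
    return [
        f"{field}: collected to {why}; kept {retention}; visible to {', '.join(who)}"
        for field, why, retention, who in inventory
    ]
```

Surfacing this inventory in the app itself (e.g., on a settings or onboarding screen) is an easy way to make governance visible to judges and patients alike.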

3. Safety (Non-Negotiable)

Cardiac rehab is safe when supervised by professionals. Any technology aimed at a recovering heart patient must be designed with caution.

  • Recognize warning signs. If a patient reports chest pain, dizziness, severe shortness of breath, nausea, or a racing or irregular heartbeat, your system should tell them to stop activity immediately and contact their care team.
  • Let the care team lead. Targets (how fast, how far, how long) must come from the care team. Your app can remind, encourage, and track—it does not replace the prescription.
  • Know what your solution can’t do. AI is not a doctor. It may educate, encourage, and flag concerns, but must state limitations clearly and route medical questions to the care team. Build that handoff into the experience.

4. Originality and Permitted Tools

Follow the official hackathon rules published on Devpost for this event (team size, eligibility, submission deadline, allowed APIs, and code of conduct). Unless the organizers say otherwise:

  • Submitted work should be produced during the event (starter templates and open-source libraries are fine; disclose what you reused).
  • Respect terms of service for any model or API you call.

5. Judging (75 Points Total)

Each team is scored on a 75-point scale across five dimensions (each worth 0–15 points).

5.1 System Design (0–15)
  • Architecture is coherent; integration points are clear
  • Solution feels credible at scale
5.2 AI Quality (0–15)
  • Conversational interactions feel natural, adaptive, and clinically appropriate
  • Prompt strategy and model choices are thoughtful
5.3 Data Approach (0–15)
  • Clear plan for collection, structure, governance, and privacy
  • Data visibly powers personalization and improvement
5.4 Engagement Design (0–15)
  • Motivation mechanics are creative and evidence-grounded
  • Oriented toward lasting behavior change (e.g., still engaged around Week 8)
5.5 Presentation (0–15)
  • Pitch is clear, concise, and compelling
  • The demo works