
AI Agent Scan

At the <time?> code freeze, Tech Connect runs automated AI agents against every team’s repo. Results are compiled into a per-team summary and shared with the judges before demos begin at <time?>.

This category is worth 10 points and is the most objective part of the judging — it’s purely based on what the agents find (or don’t find).


Security

The highest-weighted category. Critical issues here significantly impact your score.

  • Hardcoded secrets — API keys, passwords, or tokens committed to the repo
  • Injection vulnerabilities — SQL injection, NoSQL injection, command injection (see the sketch after this list)
  • Missing input validation — unvalidated user input reaching the database or business logic
  • Insecure authentication — plaintext passwords, weak hashing, missing auth checks on protected routes
  • Insecure dependencies — packages with known CVEs
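
As a concrete example, here is a minimal sketch of the injection pattern the agents flag versus the safe alternative, assuming a Node/TypeScript stack with the pg client (the table and function names are illustrative):

    import { Pool } from 'pg';

    // Connection settings come from environment variables (PGHOST, PGUSER, ...),
    // never from values hardcoded in the repo.
    const pool = new Pool();

    // Flagged: user input interpolated into SQL (classic SQL injection).
    //   pool.query(`SELECT * FROM users WHERE id = ${userId}`)

    // Safe: parameterized query; the driver escapes the value.
    async function getUser(userId: string) {
      const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [userId]);
      return rows[0];
    }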

Correctness

Critical issues here are weighted heavily.

  • Race conditions — simultaneous operations that could leave data in an inconsistent state
  • Data consistency violations — any code path that could result in invalid state
  • Missing atomicity — multi-step operations that aren’t wrapped in a database transaction (a sketch follows this list)
  • Edge cases — operations performed in invalid states, duplicate operations, zero/negative values
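
For the atomicity point above, a minimal sketch of transaction wrapping, again assuming Node/TypeScript with the pg client (the transfer function and accounts table are illustrative):

    import { Pool } from 'pg';

    const pool = new Pool();

    // A transfer is two writes; without a transaction, a crash between them
    // leaves the data in an inconsistent state.
    async function transfer(fromId: string, toId: string, amount: number) {
      const client = await pool.connect();
      try {
        await client.query('BEGIN');
        await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
        await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
        await client.query('COMMIT');   // both writes land, or neither does
      } catch (err) {
        await client.query('ROLLBACK');
        throw err;
      } finally {
        client.release();
      }
    }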

Code Quality

  • Linting issues and style inconsistencies
  • Dead code (unreachable, unused variables/functions)
  • Excessive complexity (deeply nested logic, functions doing too much)
  • Inconsistent error handling

Architecture

  • File structure and separation of concerns
  • Circular dependencies
  • Business logic leaking into controllers/routes
  • Database queries scattered throughout the codebase instead of in a data layer (see the sketch below)
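
One way to keep queries out of route handlers is a small data layer; a minimal sketch (module, table, and route names are illustrative):

    // data/userRepository.ts: every user query lives here.
    import { Pool } from 'pg';

    const pool = new Pool();

    export const userRepository = {
      findById: async (id: string) => {
        const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
        return rows[0];
      },
    };

    // routes/users.ts: the route stays thin and delegates to the data layer.
    //   router.get('/users/:id', async (req, res) => {
    //     res.json(await userRepository.findById(req.params.id));
    //   });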

Testing

  • Presence of tests (no tests = significant deduction)
  • Coverage of critical paths — especially core business logic
  • Quality of tests (tests that always pass regardless of implementation don’t count; see the example after this list)
  • Edge case coverage
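
For a sense of what “quality of tests” means in practice, a sketch using Node’s built-in test runner (transfer is a hypothetical function under test):

    import { test } from 'node:test';
    import assert from 'node:assert/strict';
    import { transfer } from '../src/transfer';   // hypothetical module under test

    // Happy-path tests alone are not enough; cover the edge cases too.
    test('rejects a negative amount', async () => {
      await assert.rejects(() => transfer('a', 'b', -50));
    });

    test('rejects a zero amount', async () => {
      await assert.rejects(() => transfer('a', 'b', 0));
    });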

Documentation

  • README completeness
  • API documentation presence
  • Inline comments on complex business logic

Dependencies

  • Outdated packages with available updates
  • Packages with known security vulnerabilities (see the commands after this list)
  • Unnecessary dependencies
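
If your project is Node-based, you can run the same checks locally before the agents do (pip list --outdated and pip-audit are rough Python equivalents):

    npm outdated      # packages with available updates
    npm audit         # packages with known vulnerabilities
    npm audit fix     # apply the non-breaking fixes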

Tips

  • Set up linting from day one (ESLint, Pylint, etc.) and fix issues as you go
  • Use a .env.example and never commit real credentials
  • Wrap all multi-step operations in database transactions
  • Validate all user input at the API boundary (see the sketch after this list)
  • Write tests as you build the core logic, not after
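
For the input-validation tip, a minimal sketch using the zod library (the schema shape and field names are illustrative):

    import { z } from 'zod';

    // Validate at the API boundary so bad input never reaches business logic.
    const transferSchema = z.object({
      toId: z.string().min(1),
      amount: z.number().positive(),   // rejects zero, negative, and non-numeric values
    });

    // In an Express handler (sketch):
    //   const parsed = transferSchema.safeParse(req.body);
    //   if (!parsed.success) return res.status(400).json(parsed.error.flatten());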

Before Code Freeze

  • Run your linter and fix any remaining issues
  • Check for any hardcoded values that should be environment variables
  • Make sure your test suite passes
  • Verify docker compose up works from a clean state (commands below)
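
A quick way to check the clean-state requirement (the -v flag also removes volumes, so previously seeded data can’t mask a broken setup):

    docker compose down -v
    docker compose up --build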

Common Findings

  1. Hardcoded database passwords in docker-compose.yml that are also used in production config (a fix is sketched after this list)
  2. No database transaction wrapping multi-step data operations
  3. Missing auth middleware on protected routes
  4. No input validation on critical inputs (negative values, non-numeric input)
  5. One commit at 16:58 containing the entire codebase
  6. Tests that only test the happy path — no edge cases, no error conditions
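
For finding 1, the usual fix is to reference an environment variable in docker-compose.yml instead of committing the real value (the service and variable names are illustrative):

    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}   # supplied via the environment or an untracked .env file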