Final Projects
The final project is a team-based research effort in which you'll explore an open problem in trustworthy machine learning.
Project Overview
Team Size: 3-4 students
Timeline: 8 weeks (Weeks 8-15)
Weight: 30% of final grade
Project Components
| Component | Weight | Due Date |
|---|---|---|
| Proposal | 5% | Week 11 |
| Progress Report | 5% | Week 13 |
| Final Presentation | 10% | Week 15 |
| Final Report | 10% | Finals Week |
Project Guidelines
Scope and Topics
Your project should address a research question in one or more areas of trustworthy ML:
- Fairness & Bias: Novel detection methods, mitigation algorithms, evaluation metrics
- Robustness: New attack techniques, defense mechanisms, certified methods
- Interpretability: Explanation methods, evaluation frameworks, human studies
- Privacy: Differential privacy mechanisms, federated learning innovations
- Safety & Alignment: Value learning, reward modeling, verification methods
Requirements
- Novelty: Propose new methods or provide new insights
- Implementation: Working code with experiments
- Evaluation: Rigorous experimental validation
- Writing: Clear technical exposition
Deliverables
Project Proposal (2 pages)
Due: Week 11
Required Sections:
1. Problem Statement: What challenge are you addressing?
2. Related Work: Brief survey of relevant papers (5-10 papers)
3. Approach: Your proposed method or analysis
4. Evaluation Plan: Datasets, metrics, baselines
5. Timeline: Milestones for remaining weeks
Progress Report (1 page)
Due: Week 13
Required Sections:
1. Progress Summary: What you've completed
2. Preliminary Results: Initial findings or implementation
3. Challenges: Issues encountered and solutions
4. Updated Timeline: Revised plan for final weeks
Final Presentation (15 minutes + 5 minutes Q&A)
Date: Week 15
Presentation Structure:
- Problem motivation (2-3 min)
- Technical approach (5-6 min)
- Experimental results (4-5 min)
- Conclusions and future work (2-3 min)
- Q&A (5 min)
Final Report (8-10 pages)
Due: Finals Week
Required Sections:
1. Abstract: Problem, approach, key findings
2. Introduction: Motivation and problem statement
3. Related Work: Comprehensive literature review
4. Method: Detailed technical description
5. Experiments: Setup, results, analysis
6. Discussion: Limitations, implications, future work
7. Conclusion: Summary of contributions
Project Ideas
Fairness Projects
- Intersectional Fairness: Methods for multi-attribute fairness
- Dynamic Fairness: Fairness that adapts over time
- Fairness in NLP: Bias detection in language models
- Causal Fairness: Using causal inference for fair decisions
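Most fairness projects start from a baseline group metric before proposing anything new. As a minimal sketch (the predictions and sensitive attribute below are random placeholders, not course data), a demographic parity gap can be computed like this:

```python
# Minimal demographic parity check; y_pred and group are hypothetical
# placeholders for model predictions and a binary sensitive attribute.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)   # stand-in binary predictions
group = rng.integers(0, 2, size=1000)    # stand-in sensitive attribute (0/1)

rate_0 = y_pred[group == 0].mean()       # positive prediction rate, group 0
rate_1 = y_pred[group == 1].mean()       # positive prediction rate, group 1
print(f"Demographic parity gap: {abs(rate_0 - rate_1):.3f}")
```

Stronger baselines such as equalized odds or within-group calibration follow the same pattern of comparing per-group statistics.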
Robustness Projects
- Universal Adversarial Perturbations: Domain-specific attacks
- Certified Defense: Improving scalability of verification
- Real-World Robustness: Robustness to natural distribution shifts
- Robust Federated Learning: Security in distributed training
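For attack- or defense-oriented projects, the fast gradient sign method (FGSM) is a standard sanity-check baseline. The sketch below assumes a trained PyTorch classifier `model` and an input batch `(x, y)` with pixel values in [0, 1]; it is an illustration, not a required starting point:

```python
# Minimal FGSM sketch; `model`, `x`, and `y` are assumed placeholders for a
# trained classifier, an input batch, and its labels.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial examples via the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```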
Interpretability Projects
- Counterfactual Explanations: Generating actionable explanations
- Explanation Evaluation: New metrics for explanation quality
- Interactive Explanations: Human-in-the-loop explanation systems
- Interpretable Deep Learning: Inherently interpretable architectures
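A simple reference point for explanation projects is input-gradient saliency, which many proposed methods are compared against. The sketch below assumes a differentiable PyTorch classifier `model` and an input batch `x`:

```python
# Input-gradient saliency sketch; `model` and `x` are assumed placeholders for
# a trained differentiable classifier and an input batch.
import torch

def saliency_map(model, x, target_class):
    """Return |d(class score)/d(input)| as a simple attribution map."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[:, target_class].sum()   # scalar score for the target class
    score.backward()                          # populates x.grad
    return x.grad.abs()
```

Evaluation-focused projects would then ask how faithful or useful such maps are, for example via deletion/insertion tests or human studies.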
Privacy Projects
- Local Differential Privacy: Privacy without trusted curator
- Private Representation Learning: Privacy-preserving embeddings
- Membership Inference Defense: Protecting against privacy attacks
- Federated Learning Privacy: Novel privacy-utility trade-offs
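As a concrete anchor for the local differential privacy idea, the classic randomized response mechanism for a single binary attribute looks like the sketch below (epsilon is the privacy budget; this is a textbook mechanism, not a specific library API):

```python
# Randomized response for one binary value: report the truth with probability
# e^eps / (e^eps + 1), otherwise flip it. Satisfies eps-local differential privacy.
import numpy as np

def randomized_response(bit, epsilon, rng=None):
    """Return a privatized version of a 0/1 value."""
    rng = rng or np.random.default_rng()
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit
```

Because the flipping probability is known, aggregated reports can be debiased to estimate population statistics without the curator ever seeing raw values.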
Evaluation Criteria
Technical Quality (40%)
- Correctness: Sound methodology and implementation
- Novelty: Original insights or approaches
- Rigor: Thorough experimental validation
- Reproducibility: Clear implementation details
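One low-effort way to support the reproducibility criterion, assuming a NumPy/PyTorch stack, is to fix and log all random seeds at the top of every experiment script; a minimal helper might look like this:

```python
# Minimal seeding helper for reproducible experiments (assumes NumPy + PyTorch).
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # PyTorch CPU (and current CUDA) RNG
    torch.cuda.manual_seed_all(seed)  # all CUDA devices, if present
```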
Presentation (30%)
- Clarity: Clear problem statement and solution
- Organization: Logical flow and structure
- Delivery: Effective oral presentation skills
- Visual Design: Quality figures and slides
Writing (30%)
- Technical Writing: Clear, precise exposition
- Related Work: Comprehensive literature coverage
- Analysis: Thoughtful discussion of results
- Formatting: Professional presentation
Resources
Datasets
- Fairness: Adult, COMPAS, CelebA, Folktables
- Robustness: CIFAR-10/100, ImageNet, MNIST
- Privacy: See federated learning benchmarks
- General: Papers with Code dataset collections
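Most of these benchmarks are one import away. As an illustration (the dataset name, version, and column name below are assumptions about the OpenML copy of Adult and may differ for other loaders such as Folktables, so verify them locally):

```python
# Sketch of loading the Adult dataset from OpenML with scikit-learn.
from sklearn.datasets import fetch_openml

adult = fetch_openml("adult", version=2, as_frame=True)
X, y = adult.data, adult.target
sensitive = X["sex"]               # a commonly used sensitive attribute
print(X.shape, y.value_counts())
```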
Computing Resources
- Google Colab Pro: For small-scale experiments
- Course Cluster: For larger computational needs
- Cloud Credits: Limited AWS/Azure credits available
Collaboration Tools
- GitHub: Version control and collaboration
- Overleaf: Collaborative LaTeX writing
- Slack: Team communication
- Office Hours: Weekly project consultations
Timeline
Weeks 8-10: Team Formation & Topic Selection
- Form teams and explore project ideas
- Read relevant papers and identify gaps
- Discuss ideas during office hours
Week 11: Proposal Submission
- Submit 2-page project proposal
- Receive feedback from instructors
Weeks 12-13: Implementation & Experiments
- Implement core methodology
- Run initial experiments
- Submit progress report
Week 14: Final Push
- Complete experiments and analysis
- Prepare presentation slides
- Draft final report
Week 15: Presentations
- Present findings to class
- Provide peer feedback
- Finalize written report
Past Project Examples
Successful Projects (Previous Years)
- "Fairness-Aware Multi-Task Learning" - Novel algorithm with theoretical analysis
- "Robust Vision Transformers" - Comprehensive robustness evaluation
- "Explaining Neural Recommendation Systems" - Human evaluation study
- "Privacy-Preserving Graph Neural Networks" - DP mechanisms for graph data
For questions about projects, contact the teaching team during office hours or via email.