Responsible AI development and ethical considerations. Use when evaluating AI bias, implementing fairness measures, conducting ethical assessments, or ensuring AI systems align with human values.

npx skill4agent add 89jobrien/steve ai-ethics

| Principle | Description |
|---|---|
| Fairness | AI should not discriminate against individuals or groups |
| Transparency | AI decisions should be explainable |
| Privacy | Personal data must be protected |
| Accountability | Clear responsibility for AI outcomes |
| Safety | AI should not cause harm |
| Human Agency | Humans should maintain control |

| Bias Type | Source | Example |
|---|---|---|
| Historical | Training data reflects past discrimination | Hiring models favoring male candidates |
| Representation | Underrepresented groups in training data | Face recognition failing on darker skin |
| Measurement | Proxy variables for protected attributes | ZIP code correlating with race |
| Aggregation | One model for diverse populations | Medical model trained only on one ethnicity |
| Evaluation | Biased evaluation metrics | Accuracy hiding disparate impact |
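
A common first step in a bias assessment is comparing selection rates across groups. The sketch below is illustrative: the data is hypothetical and the 0.8 cutoff follows the widely used "80% rule" for flagging disparate impact.

```python
# Minimal demographic-parity check (illustrative; data is hypothetical).
# Demographic parity holds when each group receives positive outcomes
# at a similar rate; ratios below 0.8 are commonly flagged.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring outcomes (1 = hired) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact (fails 80% rule)")
```

Note that a passing ratio does not rule out the measurement or aggregation biases above; it only checks one fairness criterion.
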

| Type | Audience | Purpose |
|---|---|---|
| Global | Developers | Understand overall model behavior |
| Local | End users | Explain specific decisions |
| Counterfactual | Affected parties | What would need to change for a different outcome |
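
A counterfactual explanation answers "what is the smallest change that would flip this decision?" The sketch below uses a hypothetical linear credit-scoring rule and searches one feature; real systems would search multiple features under plausibility constraints.

```python
# Counterfactual explanation sketch for a toy credit decision.
# The model, features, and threshold are all hypothetical.

def approve(income, debt, threshold=50.0):
    """Toy scoring rule: approve when the score crosses the threshold."""
    score = 0.001 * income - 0.002 * debt
    return score >= threshold

def counterfactual_income(income, debt, step=1000, limit=10**6):
    """Smallest income increase (in `step` increments) that flips a denial.
    Returns 0 if already approved, None if no flip is found within `limit`."""
    if approve(income, debt):
        return 0
    for extra in range(step, limit, step):
        if approve(income + extra, debt):
            return extra
    return None

# Hypothetical applicant: denied at income 40,000 with debt 5,000.
needed = counterfactual_income(40000, 5000)
print(f"approval would require roughly {needed} more in income")
```

The affected party gets an actionable statement ("an income increase of about X would change the outcome") rather than raw model internals.
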

| Risk Level | Examples | Requirements |
|---|---|---|
| Unacceptable | Social scoring, manipulation | Prohibited |
| High | Healthcare, employment, credit | Strict requirements |
| Limited | Chatbots | Transparency obligations |
| Minimal | Spam filters | No requirements |
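
In practice, risk tiering can be encoded as a lookup that drives which compliance requirements apply. This is a sketch in the spirit of the table above; the mapping is hypothetical and is not legal advice.

```python
# Hypothetical risk-tier lookup driving compliance requirements.
# Defaults to "high" when a use case is unknown, erring on the strict side.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "credit_scoring": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

REQUIREMENTS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, human oversight, logging",
    "limited": "transparency disclosure",
    "minimal": "none",
}

def requirements_for(use_case):
    """Return (tier, requirements) for a use case, strict by default."""
    tier = RISK_TIERS.get(use_case, "high")
    return tier, REQUIREMENTS[tier]
```

Defaulting unknown use cases to the strict tier keeps the failure mode conservative until a proper assessment is done.
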

| Pattern | Use Case | Example |
|---|---|---|
| Human-in-the-Loop | High-stakes decisions | Medical diagnosis confirmation |
| Human-on-the-Loop | Monitoring with intervention | Content moderation escalation |
| Human-out-of-Loop | Low-risk, high-volume | Spam filtering |
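
The three oversight patterns above can be implemented as a routing policy: high-stakes outputs always go to a human, low-confidence outputs escalate, and only routine, confident outputs ship automatically. The threshold and labels below are hypothetical.

```python
# Confidence-based routing across the oversight patterns above
# (threshold of 0.9 and route labels are hypothetical).

def route_decision(prediction, confidence, high_stakes):
    """Decide whether a model output ships automatically or
    goes to a human reviewer."""
    if high_stakes:
        return "human_review"   # human-in-the-loop: always confirm
    if confidence < 0.9:
        return "human_review"   # human-on-the-loop: low confidence escalates
    return "auto"               # human-out-of-loop: routine, confident cases

print(route_decision("not_spam", 0.99, high_stakes=False))   # auto
print(route_decision("diagnosis", 0.99, high_stakes=True))   # human_review
```

The key design point is that high stakes override confidence: a confident model is never a substitute for human agency in high-risk decisions.
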

references/bias_assessment.md
references/regulatory_compliance.md