docs: rewrite working with evaluations page #94
Conversation
📝 Walkthrough

The documentation file updates evaluation guidance for the hypercerts platform. It removes the code examples and embedded claims references, replaces the creation workflow with a measurements section, streamlines pattern descriptions with consistent punctuation, rewrites the trust section with dash-emphasized portable reputation, and updates the callout with ATProto-specific user-control language.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pages/getting-started/working-with-evaluations.md`:
- Around line 10-14: The NSIDs used here (org.hypercerts.context.evaluation and
org.hypercerts.context.measurement) were changed in the markdown but not updated
across the lexicon files and related docs; update the corresponding lexicon
entries in the lexicons directory to match these NSIDs, and then
search-and-replace all occurrences of those lexicon-type references in the
getting-started docs (including the Quickstart page and other pages under
getting-started) so the markdown and lexicon schema remain in sync.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c218c4cf-a1e5-45b6-943e-fa66bcdb4fbb
📒 Files selected for processing (1)
pages/getting-started/working-with-evaluations.md
Diff excerpt:

```diff
+An evaluation references the claim it assesses via a strong reference (AT-URI + CID), includes the evaluator's DID, a summary, and optionally a numeric score and linked measurements. The collection is `org.hypercerts.context.evaluation`. Creating one follows the same `createRecord` pattern shown in the [Quickstart](/getting-started/quickstart).
-```typescript
-import { AtpAgent } from "@atproto/api";
-
-const agent = new AtpAgent({ service: "https://bsky.social" });
-await agent.login({
-  identifier: "evaluator.certified.app",
-  password: "your-app-password",
-});
-```
+
+## Measurements
+
+Measurements provide quantitative data that can support an evaluation. A measurement records what was measured (the metric), the unit, the value, and optionally the methodology and evidence URIs. The collection is `org.hypercerts.context.measurement`.
```
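For readers following the new page, the record shapes the two paragraphs describe can be sketched as plain objects. This is a sketch only: the two `$type` NSIDs come from the page itself, but every other field name (`subject`, `evaluator`, `summary`, `score`, `metric`, `unit`, `value`, `methodology`, `evidence`) is an assumption, not the published lexicon.

```typescript
// Strong reference to the claim being assessed: AT-URI plus CID.
interface StrongRef {
  uri: string; // AT-URI of the claim record
  cid: string; // CID pinning the exact record version
}

// Build an evaluation record; score is optional per the page's description.
function buildEvaluationRecord(
  subject: StrongRef,
  evaluatorDid: string,
  summary: string,
  score?: number,
) {
  return {
    $type: "org.hypercerts.context.evaluation",
    subject,
    evaluator: evaluatorDid,
    summary,
    ...(score !== undefined ? { score } : {}),
    createdAt: new Date().toISOString(),
  };
}

// Build a measurement record; methodology and evidence URIs are optional.
function buildMeasurementRecord(
  metric: string,
  unit: string,
  value: number,
  methodology?: string,
  evidence?: string[],
) {
  return {
    $type: "org.hypercerts.context.measurement",
    metric,
    unit,
    value,
    ...(methodology ? { methodology } : {}),
    ...(evidence ? { evidence } : {}),
    createdAt: new Date().toISOString(),
  };
}

const evaluation = buildEvaluationRecord(
  { uri: "at://did:plc:example/org.hypercerts.claim/3jxa", cid: "bafyexample" },
  "did:plc:evaluatorexample",
  "Outcomes verified on site.",
  85,
);
const measurement = buildMeasurementRecord("trees-planted", "count", 1200);
console.log(evaluation.$type, measurement.$type);
```

Either object would then be passed as the `record` in a `createRecord` call, following the pattern the Quickstart already shows.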
Keep the new org.hypercerts.context.* NSIDs in sync with the lexicon refresh.
These collection names were updated here, but this repo’s docs guidance is to change lexicon-type references in pages/getting-started/*.md together with the corresponding files under lexicons/ and related pages like Quickstart. Otherwise this page can drift from the published schema/docs set.
Based on learnings: "In pages/getting-started/*.md, update references to lexicon types ... in sync with refreshed lexicon files in the lexicons/ directory. Do not update quickstart pages independently; ensure all occurrences referencing lexicon types are refreshed together with the corresponding lexicon files."
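The drift check the reviewer asks for can be sketched as a grep over the docs for the old `org.hypercerts.claim.*` NSIDs. This is a hedged sketch: it runs against a throwaway demo directory, not the real repo layout, and the real check would scan `pages/getting-started/` and `lexicons/`.

```shell
# Demo: create a sample docs file using only the new NSIDs.
demo="$(mktemp -d)"
cat > "$demo/working-with-evaluations.md" <<'EOF'
The collection is `org.hypercerts.context.evaluation`.
The collection is `org.hypercerts.context.measurement`.
EOF

# Scan for stale old-style NSIDs; in the real repo, point this at
# pages/getting-started/ and lexicons/ instead of the demo directory.
if grep -rq 'org\.hypercerts\.claim\.' "$demo"; then
  echo "stale NSIDs found - update lexicons and docs together"
else
  echo "NSIDs in sync"
fi
rm -rf "$demo"
```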
Summary
- Remove `createRecord` code blocks (the pattern is already on the quickstart page)
- Rename `org.hypercerts.claim.evaluation`/`org.hypercerts.claim.measurement` → `org.hypercerts.context.*`
- 87 → 34 lines
Fixes https://linear.app/hypercerts/issue/HYPER-149/working-with-evaluations-page