
docs: rewrite working with evaluations page#94

Draft
s-adamantine wants to merge 1 commit into main from fix/working-with-evaluations

Conversation

@s-adamantine
Contributor

@s-adamantine s-adamantine commented Mar 6, 2026

Summary

  • Removed duplicate createRecord code blocks (pattern already on quickstart)
  • Fixed NSIDs: `org.hypercerts.claim.evaluation`/`.measurement` → `org.hypercerts.context.evaluation`/`.measurement`
  • Fixed misleading "append-only" callout — clarified it's ATProto's ownership model, not immutability
  • Tightened trust and reputation section
  • Kept evaluation patterns (expert, community, automated)

87 → 34 lines.

Fixes https://linear.app/hypercerts/issue/HYPER-149/working-with-evaluations-page
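For reference, the `createRecord` pattern the page now defers to the Quickstart for can be sketched as below. The record field names are assumptions inferred from this PR's description, not the published lexicon:

```typescript
// Hypothetical sketch of an evaluation record and the createRecord call.
// Field names ($type aside) are assumptions, not a confirmed schema.
type StrongRef = { uri: string; cid: string };

interface EvaluationRecord {
  $type: "org.hypercerts.context.evaluation";
  subject: StrongRef; // the claim being assessed (AT-URI + CID)
  evaluator: string;  // the evaluator's DID
  summary: string;
  score?: number;     // optional numeric score
  createdAt: string;
}

const record: EvaluationRecord = {
  $type: "org.hypercerts.context.evaluation",
  subject: {
    uri: "at://did:plc:example/org.hypercerts.claim/3k2a",
    cid: "bafyreib2rxk3rh6kzwq",
  },
  evaluator: "did:plc:evaluator",
  summary: "Methodology verified against the published dataset.",
  score: 0.9,
  createdAt: new Date().toISOString(),
};

// With an authenticated AtpAgent this would be written via:
// await agent.com.atproto.repo.createRecord({
//   repo: agent.session!.did,
//   collection: "org.hypercerts.context.evaluation",
//   record,
// });
console.log(record.$type);
```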

Summary by CodeRabbit

  • Documentation
    • Simplified the getting-started guide for evaluations with clearer, more concise explanations.
    • Consolidated measurement guidance and removed redundant code examples for easier understanding.
    • Standardized evaluation patterns formatting for consistency.
    • Updated trust and reputation concepts to emphasize portable reputation records.

@vercel
Contributor

vercel bot commented Mar 6, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| hypercerts-atproto-documentation | Ready | Preview, Comment | Mar 6, 2026 4:02am |
| hypercerts-v0.2-documentation | Ready | Preview, Comment | Mar 6, 2026 4:02am |


@coderabbitai

coderabbitai bot commented Mar 6, 2026

📝 Walkthrough


The documentation file updates evaluation guidance for the hypercerts platform: it removes code examples and embedded claims references, replaces the creation workflow with a measurements section, streamlines pattern descriptions with consistent punctuation, rewrites the trust section to emphasize portable reputation, and updates the callout with ATProto-specific user-control language.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Documentation Content Updates<br>`pages/getting-started/working-with-evaluations.md` | Removed evaluation creation code example and AtpAgent login details. Replaced "Create an evaluation" block with "Measurements" section header. Simplified measurements guidance and removed detailed code samples. Updated "Evaluation patterns" bullets with consistent punctuation and shorter phrasing. Rewrote trust and reputation paragraph to emphasize portable reputation. Changed callout note from generic append-only description to ATProto-specific guidance on user control and multi-perspective records. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

Poem

A docs update hops through with grace,
Code examples trimmed from their place,
ATProto guidance takes the stage,
Measurements and trust turn the page, 🐰✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title 'docs: rewrite working with evaluations page' directly and accurately summarizes the main change: a comprehensive rewrite of the working-with-evaluations documentation page.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pages/getting-started/working-with-evaluations.md`:
- Around line 10-14: The NSIDs used here (org.hypercerts.context.evaluation and
org.hypercerts.context.measurement) were changed in the markdown but not updated
across the lexicon files and related docs; update the corresponding lexicon
entries in the lexicons directory to match these NSIDs, and then
search-and-replace all occurrences of those lexicon-type references in the
getting-started docs (including the Quickstart page and other pages under
getting-started) so the markdown and lexicon schema remain in sync.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c218c4cf-a1e5-45b6-943e-fa66bcdb4fbb

📥 Commits

Reviewing files that changed from the base of the PR and between 78fcae9 and 2c34086.

📒 Files selected for processing (1)
  • pages/getting-started/working-with-evaluations.md

Comment on lines +10 to +14
````diff
 An evaluation references the claim it assesses via a strong reference (AT-URI + CID), includes the evaluator's DID, a summary, and optionally a numeric score and linked measurements. The collection is `org.hypercerts.context.evaluation`. Creating one follows the same `createRecord` pattern shown in the [Quickstart](/getting-started/quickstart).

-```typescript
-import { AtpAgent } from "@atproto/api";
+## Measurements

-const agent = new AtpAgent({ service: "https://bsky.social" });
-await agent.login({
-  identifier: "evaluator.certified.app",
-  password: "your-app-password",
-});
+Measurements provide quantitative data that can support an evaluation. A measurement records what was measured (the metric), the unit, the value, and optionally the methodology and evidence URIs. The collection is `org.hypercerts.context.measurement`.
````
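A measurement record following the field list described above might look like the sketch below. Exact property names are assumptions for illustration, not the published lexicon:

```typescript
// Hypothetical sketch of a measurement record; field names other than $type
// are assumptions inferred from the prose, not a confirmed schema.
interface MeasurementRecord {
  $type: "org.hypercerts.context.measurement";
  metric: string;       // what was measured
  unit: string;
  value: number;
  methodology?: string; // optional description of how the value was obtained
  evidence?: string[];  // optional evidence URIs
  createdAt: string;
}

const measurement: MeasurementRecord = {
  $type: "org.hypercerts.context.measurement",
  metric: "trees-planted",
  unit: "count",
  value: 1200,
  methodology: "Field survey of sampled plots",
  evidence: ["https://example.com/survey-report"],
  createdAt: new Date().toISOString(),
};

console.log(`${measurement.metric}: ${measurement.value} ${measurement.unit}`);
```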

⚠️ Potential issue | 🟠 Major

Keep the new org.hypercerts.context.* NSIDs in sync with the lexicon refresh.

These collection names were updated here, but this repo’s docs guidance is to change lexicon-type references in pages/getting-started/*.md together with the corresponding files under lexicons/ and related pages like Quickstart. Otherwise this page can drift from the published schema/docs set.

Based on learnings: "In pages/getting-started/*.md, update references to lexicon types ... in sync with refreshed lexicon files in the lexicons/ directory. Do not update quickstart pages independently; ensure all occurrences referencing lexicon types are refreshed together with the corresponding lexicon files."
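The sweep the reviewer describes could be sketched as a small shell script. The directory layout and sample file here are stand-ins for illustration; only the NSID prefixes come from the review:

```shell
# Hypothetical sketch: refresh old claim.* NSIDs to context.* across docs and
# lexicons. The workdir and sample file simulate the repo layout; adjust paths
# to the actual repository before use.
set -eu
workdir=$(mktemp -d)
mkdir -p "$workdir/pages/getting-started"
cat > "$workdir/pages/getting-started/quickstart.md" <<'EOF'
The collection is `org.hypercerts.claim.evaluation`.
EOF

# Replace every occurrence of the old NSID prefix, keeping .bak backups.
grep -rl 'org\.hypercerts\.claim\.' "$workdir" | while read -r f; do
  sed -i.bak 's/org\.hypercerts\.claim\./org.hypercerts.context./g' "$f"
done
```

Running the same loop over `pages/getting-started/` and `lexicons/` together is what keeps the markdown and the lexicon schema in sync.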


@s-adamantine s-adamantine marked this pull request as draft March 6, 2026 04:07
