chore(ci): upgrade link-checker to v1.5.0 and improve reporting (#6768)

* chore(ci): upgrade link-checker to v1.3.1 and improve reporting

- Update workflow to use link-checker v1.3.1
- Fix heredoc bug that prevented template variable evaluation
- Improve broken link reporting with:
  - Severity indicators (error vs warning)
  - Table format for better readability
  - Content file path mapping for easier fixing
  - Collapsible troubleshooting tips
- Add fallback parsing for different JSON output structures
- Update config file comments to reference v1.3.1
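The fallback idea can be sketched with jq's alternative operator. The two field names below are the schema variants the workflow has assumed (old `broken_count`, new `error_count`); the sample JSON is illustrative, not real output:

```shell
# Build a sample result file using the newer schema
cat > /tmp/sample-results.json <<'EOF'
{"summary": {"error_count": 3, "total_checked": 10}}
EOF

# `//` falls back to the next expression when a field is missing or null,
# so one query handles both the old (broken_count) and new (error_count) schemas
ERRORS=$(jq -r '.summary.error_count // .summary.broken_count // 0' /tmp/sample-results.json)
echo "errors=$ERRORS"   # prints errors=3
```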

https://claude.ai/code/session_015JNkhFiwJnoAxJLBP7AiyZ

* fix(ci): fix sync-link-checker-binary workflow

- Add missing checkout step (gh release requires a git repo)
- Use DOCS_TOOLING_TOKEN secret for cross-repo private access
- Use GitHub API to fetch release info instead of direct URL
- Add binary size validation to catch failed downloads
- Handle optional checksums.txt gracefully

* fix(ci): align workflow JSON parsing with link-checker v1.3.1 output

Tested link-checker v1.3.1 locally and discovered the actual JSON
structure differs from what the workflow assumed:
- summary.error_count (not broken_count)
- errors[]/warnings[]/info[] arrays (not files[] or broken_links[])
- Each entry: {url, status, error, file, line, severity}
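A minimal sketch of that structure (all values illustrative, not copied from an actual run):

```json
{
  "summary": {
    "total_checked": 42,
    "error_count": 1,
    "warning_count": 2,
    "success_rate": 92.9
  },
  "errors": [
    {
      "url": "/flux/v0/stdlib/universe/from",
      "status": null,
      "error": "Cannot find file",
      "file": "public/influxdb/v1/flux/get-started/index.html",
      "line": 27,
      "severity": "error"
    }
  ],
  "warnings": [],
  "info": []
}
```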

Changes:
- Fix summary field names to match v1.3.1 JSON schema
- Parse .errors[] and .warnings[] arrays correctly
- Show warnings in collapsible section (don't fail CI)
- Map public/ paths to content/ paths for GitHub annotations
- Handle missing line numbers gracefully
- Cap warnings display at 20 with note about artifacts

* fix: broken links from issues #6682, #6461

- Fix wrong path /influxdb/v2/upgrade/v2-beta-to-v2/ →
  /influxdb/v2/install/upgrade/v2-beta-to-v2/ (#6682)
- Fix fragment mismatch: #disable-the-internal-database →
  #disable-the-_internal-database (Hugo keeps the underscore from the
  inline-code `_internal` when generating the anchor) (#6461)
- Fix dead links to /flux/v0/stdlib/universe/from →
  /flux/v0/stdlib/influxdata/influxdb/from/ (from() moved from
  universe to influxdata/influxdb package) (#6682)

closes #6682, closes #6461
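For reference, Hugo slugifies the rendered heading text when generating the anchor; a sketch of the heading in question (heading text inferred from the full anchor in the diff, not copied from the source file):

```markdown
#### Disable the `_internal` database in production clusters
<!-- Hugo-generated anchor: #disable-the-_internal-database-in-production-clusters -->
```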

* Update content/influxdb/v1/flux/get-started/_index.md

Force error to test workflow

* fix(ci): reclassify file-not-found warnings as errors in link checker

link-checker v1.3.1 classifies missing local files as warnings because
they have no HTTP status code and don't match error_codes=[404,410].
This meant broken internal links (like /flux/v0/stdlib/influxdata/influxdbfrom/)
were reported as warnings but didn't fail CI.

Fix by post-processing link-check-results.json to move "Cannot find file"
entries from warnings to errors before evaluating the check result.

Also reverts the intentional test bug in influxdb/v1/flux/get-started/_index.md.

* test(ci): intentionally broken fragment to test link-checker detection

Introduces broken heading anchor #tools-for-working-with-fluxxx (extra x)
to verify whether link-checker/lychee validates fragment anchors on local
files. Also documents the known gap: lychee doesn't validate fragments on
local file URLs (it checks file existence but doesn't open index.html to
verify heading anchors exist).

This commit should be reverted after testing.

* fix(ci): revert test fragment, document fragment checking limitation

Reverts the intentional broken fragment test. Workflow confirmed that
lychee does not validate heading anchors on local file URLs — it checks
file/directory existence but doesn't open index.html to verify fragments.

The ^file://.*# exclusion must remain to prevent false positives from
Hugo pretty URL resolution (lychee resolves /page#frag as file:///page
instead of file:///page/index.html).

Updated config comments to document this known gap clearly.

* chore(ci): update link-checker to v1.4.0

* Update link-checker to v1.5.0 in PR workflow

* fix(platform): broken link

---------

Co-authored-by: Claude <noreply@anthropic.com>
Jason Stirnaman 2026-01-30 15:03:13 -06:00 committed by GitHub
parent 39508ff6f1
commit 6a303c348f
12 changed files with 248 additions and 104 deletions


@@ -1,5 +1,5 @@
# Lychee link checker configuration
# Updated for link-checker v1.3.0 with severity-based classification
# Updated for link-checker v1.3.1 with severity-based classification
#
# With severity levels, we no longer need to exclude sites that return:
# - 403/401/429 (classified as "info" - shown but don't fail CI)


@@ -1,5 +1,5 @@
# Production Link Checker Configuration for InfluxData docs-v2
# Updated for link-checker v1.3.0 with severity-based classification
# Updated for link-checker v1.3.1 with severity-based classification
#
# With severity levels, we no longer need to exclude sites that return:
# - 403/401/429 (classified as "info" - shown but don't fail CI)
@@ -51,10 +51,12 @@ exclude = [
# TODO: Remove after fixing canonical URL generation or link-checker domain replacement
"^https://docs\\.influxdata\\.com/",
# Local file URLs with fragments (workaround for link-checker Hugo pretty URL bug)
# link-checker converts /path/to/page#fragment to file:///path/to/page#fragment
# but the actual file is at /path/to/page/index.html, causing false fragment errors
# TODO: Remove after fixing link-checker to handle Hugo pretty URLs with fragments
# Local file URLs with fragments — lychee resolves /path/to/page#fragment to
# file:///path/to/page#fragment, but the actual file is at /path/to/page/index.html.
# This causes false "Cannot find file" errors for valid pages with Hugo pretty URLs.
# NOTE: This also means lychee CANNOT validate fragments on local files.
# Fragment validation for internal links is a known gap (lychee doesn't open
# index.html to check heading anchors).
"^file://.*#",
# Common documentation placeholders
@@ -85,6 +87,8 @@ warning_codes = [500, 502, 503, 504]
info_codes = [401, 403, 429]
# Set to true to treat warnings as errors (stricter validation)
# NOTE: Missing local files (file-not-found) have no HTTP status code and
# default to "warning" severity. The workflow reclassifies these as errors.
strict = false
[ci]
@@ -108,6 +112,8 @@ max_execution_time_minutes = 10
[reporting]
# Report configuration
# NOTE: lychee's --include-fragments does not validate fragments on local file
# URLs. It only works for HTTP responses. Set to false to avoid confusion.
include_fragments = false
verbose = false
no_progress = true # Disable progress bar in CI


@@ -12,18 +12,18 @@ jobs:
link-check:
name: Check links in affected files
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Detect content changes
id: detect
run: |
echo "🔍 Detecting changes between ${{ github.base_ref }} and ${{ github.sha }}"
# For PRs, use the GitHub Files API to get changed files
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "Using GitHub API to detect PR changes..."
@@ -34,23 +34,23 @@ jobs:
echo "Using git diff to detect changes..."
git diff --name-only ${{ github.event.before }}..${{ github.sha }} > all_changed_files.txt
fi
# Filter for content markdown files
CHANGED_FILES=$(grep '^content/.*\.md$' all_changed_files.txt || true)
echo "📁 All changed files:"
cat all_changed_files.txt
echo ""
echo "📝 Content markdown files:"
echo "$CHANGED_FILES"
if [[ -n "$CHANGED_FILES" ]]; then
echo "✅ Found $(echo "$CHANGED_FILES" | wc -l) changed content file(s)"
echo "has-changes=true" >> $GITHUB_OUTPUT
echo "changed-content<<EOF" >> $GITHUB_OUTPUT
echo "$CHANGED_FILES" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
# Check if any shared content files were modified
SHARED_CHANGES=$(echo "$CHANGED_FILES" | grep '^content/shared/' || true)
if [[ -n "$SHARED_CHANGES" ]]; then
@@ -64,57 +64,57 @@ jobs:
echo "has-changes=false" >> $GITHUB_OUTPUT
echo "has-shared-content=false" >> $GITHUB_OUTPUT
fi
- name: Skip if no content changes
if: steps.detect.outputs.has-changes == 'false'
run: |
echo "No content changes detected in this PR - skipping link check"
echo "✅ **No content changes detected** - link check skipped" >> $GITHUB_STEP_SUMMARY
- name: Setup Node.js
if: steps.detect.outputs.has-changes == 'true'
uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'yarn'
- name: Install dependencies
if: steps.detect.outputs.has-changes == 'true'
run: yarn install --frozen-lockfile
- name: Build Hugo site
if: steps.detect.outputs.has-changes == 'true'
run: npx hugo --minify
- name: Download link-checker binary
if: steps.detect.outputs.has-changes == 'true'
run: |
echo "Downloading link-checker binary from docs-v2 releases..."
# Download from docs-v2's own releases (always accessible)
curl -L -H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-o link-checker-info.json \
"https://api.github.com/repos/influxdata/docs-v2/releases/tags/link-checker-v1.3.0"
"https://api.github.com/repos/influxdata/docs-v2/releases/tags/link-checker-v1.5.0"
# Extract download URL for linux binary
DOWNLOAD_URL=$(jq -r '.assets[] | select(.name | test("link-checker.*linux")) | .url' link-checker-info.json)
if [[ "$DOWNLOAD_URL" == "null" || -z "$DOWNLOAD_URL" ]]; then
echo "❌ No linux binary found in release"
echo "Available assets:"
jq -r '.assets[].name' link-checker-info.json
exit 1
fi
echo "📥 Downloading: $DOWNLOAD_URL"
curl -L -H "Accept: application/octet-stream" \
-H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-o link-checker "$DOWNLOAD_URL"
chmod +x link-checker
./link-checker --version
- name: Verify link checker config exists
if: steps.detect.outputs.has-changes == 'true'
run: |
@@ -124,26 +124,26 @@ jobs:
exit 1
fi
echo "✅ Using configuration: .ci/link-checker/production.lycherc.toml"
- name: Map changed content to public files
if: steps.detect.outputs.has-changes == 'true'
id: mapping
run: |
echo "Mapping changed content files to public HTML files..."
# Create temporary file with changed content files
echo "${{ steps.detect.outputs.changed-content }}" > changed-files.txt
# Map content files to public files
PUBLIC_FILES=$(cat changed-files.txt | xargs -r ./link-checker map --existing-only)
if [[ -n "$PUBLIC_FILES" ]]; then
echo "Found affected public files:"
echo "$PUBLIC_FILES"
echo "public-files<<EOF" >> $GITHUB_OUTPUT
echo "$PUBLIC_FILES" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
# Count files for summary
FILE_COUNT=$(echo "$PUBLIC_FILES" | wc -l)
echo "file-count=$FILE_COUNT" >> $GITHUB_OUTPUT
@@ -152,83 +152,174 @@ jobs:
echo "public-files=" >> $GITHUB_OUTPUT
echo "file-count=0" >> $GITHUB_OUTPUT
fi
- name: Run link checker
if: steps.detect.outputs.has-changes == 'true' && steps.mapping.outputs.public-files != ''
id: link-check
run: |
echo "Checking links in ${{ steps.mapping.outputs.file-count }} affected files..."
# Create temporary file with public files list
echo "${{ steps.mapping.outputs.public-files }}" > public-files.txt
# Run link checker with detailed JSON output
set +e # Don't fail immediately on error
cat public-files.txt | xargs -r ./link-checker check \
--config .ci/link-checker/production.lycherc.toml \
--format json \
--output link-check-results.json
EXIT_CODE=$?
if [[ -f link-check-results.json ]]; then
# Parse results
BROKEN_COUNT=$(jq -r '.summary.broken_count // 0' link-check-results.json)
# Parse results using actual v1.3.1 JSON structure
ERROR_COUNT=$(jq -r '.summary.error_count // 0' link-check-results.json)
WARNING_COUNT=$(jq -r '.summary.warning_count // 0' link-check-results.json)
TOTAL_COUNT=$(jq -r '.summary.total_checked // 0' link-check-results.json)
SUCCESS_RATE=$(jq -r '.summary.success_rate // 0' link-check-results.json)
echo "broken-count=$BROKEN_COUNT" >> $GITHUB_OUTPUT
# Reclassify file-not-found warnings as errors
# link-checker classifies missing local files as warnings (no HTTP status code),
# but these represent genuinely broken internal links and should fail CI.
FILE_NOT_FOUND_COUNT=$(jq '[.warnings[] | select(.error | test("Cannot find file"))] | length' link-check-results.json 2>/dev/null || echo 0)
if [[ $FILE_NOT_FOUND_COUNT -gt 0 ]]; then
echo "⚠️ Found $FILE_NOT_FOUND_COUNT missing local file(s) — reclassifying as errors"
# Move file-not-found entries from warnings to errors
jq '
.errors += [.warnings[] | select(.error | test("Cannot find file")) | .severity = "error"]
| .warnings = [.warnings[] | select(.error | test("Cannot find file") | not)]
| .summary.error_count = (.errors | length)
| .summary.warning_count = (.warnings | length)
' link-check-results.json > link-check-results-fixed.json
mv link-check-results-fixed.json link-check-results.json
ERROR_COUNT=$(jq -r '.summary.error_count // 0' link-check-results.json)
WARNING_COUNT=$(jq -r '.summary.warning_count // 0' link-check-results.json)
fi
echo "error-count=$ERROR_COUNT" >> $GITHUB_OUTPUT
echo "warning-count=$WARNING_COUNT" >> $GITHUB_OUTPUT
echo "total-count=$TOTAL_COUNT" >> $GITHUB_OUTPUT
echo "success-rate=$SUCCESS_RATE" >> $GITHUB_OUTPUT
if [[ $BROKEN_COUNT -gt 0 ]]; then
echo "❌ Found $BROKEN_COUNT broken links out of $TOTAL_COUNT total links"
if [[ $ERROR_COUNT -gt 0 ]]; then
echo "❌ Found $ERROR_COUNT broken links out of $TOTAL_COUNT total links"
echo "check-result=failed" >> $GITHUB_OUTPUT
else
echo "✅ All $TOTAL_COUNT links are valid"
echo "✅ All $TOTAL_COUNT links are valid ($WARNING_COUNT warnings)"
echo "check-result=passed" >> $GITHUB_OUTPUT
fi
else
echo "❌ Link check failed to generate results"
echo "check-result=error" >> $GITHUB_OUTPUT
fi
exit $EXIT_CODE
- name: Process and report results
if: always() && steps.detect.outputs.has-changes == 'true' && steps.mapping.outputs.public-files != ''
env:
FILE_COUNT: ${{ steps.mapping.outputs.file-count }}
TOTAL_COUNT: ${{ steps.link-check.outputs.total-count }}
ERROR_COUNT: ${{ steps.link-check.outputs.error-count }}
WARNING_COUNT: ${{ steps.link-check.outputs.warning-count }}
SUCCESS_RATE: ${{ steps.link-check.outputs.success-rate }}
CHECK_RESULT: ${{ steps.link-check.outputs.check-result }}
run: |
if [[ -f link-check-results.json ]]; then
# Create detailed error annotations for broken links
if [[ "${{ steps.link-check.outputs.check-result }}" == "failed" ]]; then
echo "Creating error annotations for broken links..."
jq -r '.broken_links[]? |
"::error file=\(.file // "unknown"),line=\(.line // 1)::Broken link: \(.url) - \(.error // "Unknown error")"' \
link-check-results.json || true
fi
# Generate summary comment
cat >> $GITHUB_STEP_SUMMARY << 'EOF'
## Link Check Results
**Files Checked:** ${{ steps.mapping.outputs.file-count }}
**Total Links:** ${{ steps.link-check.outputs.total-count }}
**Broken Links:** ${{ steps.link-check.outputs.broken-count }}
**Success Rate:** ${{ steps.link-check.outputs.success-rate }}%
EOF
if [[ "${{ steps.link-check.outputs.check-result }}" == "failed" ]]; then
echo "❌ **Link check failed** - see annotations above for details" >> $GITHUB_STEP_SUMMARY
# Generate summary header
echo "## Link Check Results" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "| Metric | Value |" >> $GITHUB_STEP_SUMMARY
echo "|--------|-------|" >> $GITHUB_STEP_SUMMARY
echo "| Files Checked | ${FILE_COUNT} |" >> $GITHUB_STEP_SUMMARY
echo "| Total Links | ${TOTAL_COUNT} |" >> $GITHUB_STEP_SUMMARY
echo "| Errors | ${ERROR_COUNT} |" >> $GITHUB_STEP_SUMMARY
echo "| Warnings | ${WARNING_COUNT} |" >> $GITHUB_STEP_SUMMARY
echo "| Success Rate | ${SUCCESS_RATE}% |" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
# Report broken links (errors) with annotations
if [[ "${CHECK_RESULT}" == "failed" ]]; then
echo "### Broken Links" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "| Severity | Source File | Broken URL | Error |" >> $GITHUB_STEP_SUMMARY
echo "|----------|------------|------------|-------|" >> $GITHUB_STEP_SUMMARY
# Process errors (these fail CI)
jq -c '.errors[]?' link-check-results.json 2>/dev/null | while read -r entry; do
URL=$(echo "$entry" | jq -r '.url // "unknown"')
ERROR=$(echo "$entry" | jq -r '.error // "Unknown error"')
FILE=$(echo "$entry" | jq -r '.file // "unknown"')
LINE=$(echo "$entry" | jq -r '.line // empty')
# Map public path to content path for annotations
CONTENT_FILE=$(echo "$FILE" | sed 's|.*/public/|content/|' | sed 's|/index\.html$|/_index.md|')
# Create GitHub annotation
if [[ -n "$LINE" && "$LINE" != "null" ]]; then
echo "::error file=${CONTENT_FILE},line=${LINE}::Broken link: ${URL} (${ERROR})"
else
echo "::error file=${CONTENT_FILE}::Broken link: ${URL} (${ERROR})"
fi
# Add row to summary table
SAFE_URL=$(echo "$URL" | sed 's/|/\\|/g')
SAFE_ERROR=$(echo "$ERROR" | sed 's/|/\\|/g' | cut -c1-80)
echo "| 🔴 error | \`${CONTENT_FILE}\` | ${SAFE_URL} | ${SAFE_ERROR} |" >> $GITHUB_STEP_SUMMARY
done
echo "" >> $GITHUB_STEP_SUMMARY
echo "---" >> $GITHUB_STEP_SUMMARY
echo "❌ **Link check failed** — fix the broken links listed above before merging." >> $GITHUB_STEP_SUMMARY
else
echo "✅ **All links are valid**" >> $GITHUB_STEP_SUMMARY
fi
# Report warnings (don't fail CI, but useful context)
WARNING_ARRAY_LEN=$(jq '.warnings | length' link-check-results.json 2>/dev/null || echo 0)
if [[ "$WARNING_ARRAY_LEN" -gt 0 ]]; then
echo "" >> $GITHUB_STEP_SUMMARY
echo "<details>" >> $GITHUB_STEP_SUMMARY
echo "<summary>⚠️ ${WARNING_ARRAY_LEN} warning(s) (do not fail CI)</summary>" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "| Source File | URL | Issue |" >> $GITHUB_STEP_SUMMARY
echo "|------------|-----|-------|" >> $GITHUB_STEP_SUMMARY
jq -c '.warnings[]?' link-check-results.json 2>/dev/null | head -20 | while read -r entry; do
URL=$(echo "$entry" | jq -r '.url // "unknown"')
ERROR=$(echo "$entry" | jq -r '.error // "Unknown"')
FILE=$(echo "$entry" | jq -r '.file // "unknown"')
CONTENT_FILE=$(echo "$FILE" | sed 's|.*/public/|content/|' | sed 's|/index\.html$|/_index.md|')
SAFE_URL=$(echo "$URL" | sed 's/|/\\|/g')
SAFE_ERROR=$(echo "$ERROR" | sed 's/|/\\|/g' | cut -c1-80)
echo "| \`${CONTENT_FILE}\` | ${SAFE_URL} | ${SAFE_ERROR} |" >> $GITHUB_STEP_SUMMARY
done
if [[ "$WARNING_ARRAY_LEN" -gt 20 ]]; then
echo "" >> $GITHUB_STEP_SUMMARY
echo "_Showing first 20 of ${WARNING_ARRAY_LEN} warnings. Download the artifact for full results._" >> $GITHUB_STEP_SUMMARY
fi
echo "" >> $GITHUB_STEP_SUMMARY
echo "</details>" >> $GITHUB_STEP_SUMMARY
fi
# Add helpful tips
echo "" >> $GITHUB_STEP_SUMMARY
echo "<details>" >> $GITHUB_STEP_SUMMARY
echo "<summary>💡 Troubleshooting Tips</summary>" >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "- **404 errors**: The linked page doesn't exist. Check for typos or update the link." >> $GITHUB_STEP_SUMMARY
echo "- **Relative links**: Use relative paths starting with \`/\` for internal links." >> $GITHUB_STEP_SUMMARY
echo "- **Anchors**: Ensure heading anchors match the linked fragment exactly." >> $GITHUB_STEP_SUMMARY
echo "- **Warnings**: External sites may be temporarily unavailable — these don't fail CI." >> $GITHUB_STEP_SUMMARY
echo "" >> $GITHUB_STEP_SUMMARY
echo "</details>" >> $GITHUB_STEP_SUMMARY
else
echo "⚠️ **Link check could not complete** - no results file generated" >> $GITHUB_STEP_SUMMARY
echo "⚠️ **Link check could not complete** — no results file generated" >> $GITHUB_STEP_SUMMARY
fi
- name: Upload detailed results
if: always() && steps.detect.outputs.has-changes == 'true' && steps.mapping.outputs.public-files != ''
uses: actions/upload-artifact@v4
@@ -238,4 +329,4 @@ jobs:
link-check-results.json
changed-files.txt
public-files.txt
retention-days: 30
retention-days: 30


@@ -12,34 +12,80 @@ jobs:
sync-binary:
name: Sync link-checker binary from docs-tooling
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download binary from docs-tooling release
run: |
echo "Downloading link-checker ${{ inputs.version }} from docs-tooling..."
# Download binary from docs-tooling release
curl -L -H "Accept: application/octet-stream" \
-H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
# Download binary from docs-tooling release using the GitHub API
# NOTE: requires DOCS_TOOLING_TOKEN secret with read access to docs-tooling releases
RELEASE_INFO=$(curl -sL \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.DOCS_TOOLING_TOKEN }}" \
"https://api.github.com/repos/influxdata/docs-tooling/releases/tags/link-checker-${{ inputs.version }}")
# Check if release was found
if echo "$RELEASE_INFO" | jq -e '.message == "Not Found"' >/dev/null 2>&1; then
echo "❌ Release link-checker-${{ inputs.version }} not found in docs-tooling"
exit 1
fi
# Download linux binary asset
BINARY_URL=$(echo "$RELEASE_INFO" | jq -r '.assets[] | select(.name == "link-checker-linux-x86_64") | .url')
if [[ -z "$BINARY_URL" || "$BINARY_URL" == "null" ]]; then
echo "❌ No linux binary found in release"
echo "Available assets:"
echo "$RELEASE_INFO" | jq -r '.assets[].name'
exit 1
fi
curl -sL \
-H "Accept: application/octet-stream" \
-H "Authorization: Bearer ${{ secrets.DOCS_TOOLING_TOKEN }}" \
-o link-checker-linux-x86_64 \
"https://github.com/influxdata/docs-tooling/releases/download/link-checker-${{ inputs.version }}/link-checker-linux-x86_64"
# Download checksums
curl -L -H "Accept: application/octet-stream" \
-H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
-o checksums.txt \
"https://github.com/influxdata/docs-tooling/releases/download/link-checker-${{ inputs.version }}/checksums.txt"
# Verify downloads
ls -la link-checker-linux-x86_64 checksums.txt
"$BINARY_URL"
# Download checksums if available
CHECKSUMS_URL=$(echo "$RELEASE_INFO" | jq -r '.assets[] | select(.name == "checksums.txt") | .url')
if [[ -n "$CHECKSUMS_URL" && "$CHECKSUMS_URL" != "null" ]]; then
curl -sL \
-H "Accept: application/octet-stream" \
-H "Authorization: Bearer ${{ secrets.DOCS_TOOLING_TOKEN }}" \
-o checksums.txt \
"$CHECKSUMS_URL"
fi
# Verify the binary is valid (not an error page)
FILE_SIZE=$(stat -c%s link-checker-linux-x86_64 2>/dev/null || stat -f%z link-checker-linux-x86_64)
if [[ "$FILE_SIZE" -lt 1000 ]]; then
echo "❌ Downloaded binary is only ${FILE_SIZE} bytes - likely a failed download"
echo "Content:"
cat link-checker-linux-x86_64
exit 1
fi
echo "✅ Downloaded binary: ${FILE_SIZE} bytes"
ls -la link-checker-linux-x86_64
- name: Create docs-v2 release
run: |
echo "Creating link-checker-${{ inputs.version }} release in docs-v2..."
# Collect assets to upload
ASSETS="link-checker-linux-x86_64"
if [[ -f checksums.txt ]]; then
ASSETS="$ASSETS checksums.txt"
fi
gh release create \
--repo "${{ github.repository }}" \
--title "Link Checker Binary ${{ inputs.version }}" \
--notes "Link validation tooling binary for docs-v2 GitHub Actions workflows.
--notes "$(cat <<NOTES
Link validation tooling binary for docs-v2 GitHub Actions workflows.
This binary is distributed from the docs-tooling repository release link-checker-${{ inputs.version }}.
@@ -60,9 +106,10 @@ jobs:
### Changes in ${{ inputs.version }}
See the [docs-tooling release](https://github.com/influxdata/docs-tooling/releases/tag/link-checker-${{ inputs.version }}) for detailed changelog." \
link-checker-${{ inputs.version }} \
link-checker-linux-x86_64 \
checksums.txt
See the [docs-tooling release](https://github.com/influxdata/docs-tooling/releases/tag/link-checker-${{ inputs.version }}) for detailed changelog.
NOTES
)" \
"link-checker-${{ inputs.version }}" \
$ASSETS
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -52,7 +52,7 @@ A **bucket** is a named location where data is stored that has a retention polic
It's similar to an InfluxDB v1.x "database," but is a combination of both a database and a retention policy.
When using multiple retention policies, each retention policy is treated as its own bucket.
Flux's [`from()` function](/flux/v0/stdlib/universe/from), which defines an InfluxDB data source, requires a `bucket` parameter.
Flux's [`from()` function](/flux/v0/stdlib/influxdata/influxdb/from/), which defines an InfluxDB data source, requires a `bucket` parameter.
When using Flux with InfluxDB v1.x, use the following bucket naming convention which combines
the database name and retention policy into a single bucket name:


@@ -27,7 +27,7 @@ Every Flux query needs the following:
## 1. Define your data source
Flux's [`from()`](/flux/v0/stdlib/universe/from) function defines an InfluxDB data source.
Flux's [`from()`](/flux/v0/stdlib/influxdata/influxdb/from/) function defines an InfluxDB data source.
It requires a [`bucket`](/enterprise_influxdb/v1/flux/get-started/#buckets) parameter.
For this example, use `telegraf/autogen`, a combination of the default database and retention policy provided by the TICK stack.


@@ -57,7 +57,7 @@ A **bucket** is a named location where data is stored that has a retention polic
It's similar to an InfluxDB v1.x "database," but is a combination of both a database and a retention policy.
When using multiple retention policies, each retention policy is treated as its own bucket.
Flux's [`from()` function](/flux/v0/stdlib/universe/from), which defines an InfluxDB data source, requires a `bucket` parameter.
Flux's [`from()` function](/flux/v0/stdlib/influxdata/influxdb/from/), which defines an InfluxDB data source, requires a `bucket` parameter.
When using Flux with InfluxDB v1.x, use the following bucket naming convention which combines
the database name and retention policy into a single bucket name:


@@ -25,7 +25,7 @@ Every Flux query needs the following:
## 1. Define your data source
Flux's [`from()`](/flux/v0/stdlib/universe/from) function defines an InfluxDB data source.
Flux's [`from()`](/flux/v0/stdlib/influxdata/influxdb/from/) function defines an InfluxDB data source.
It requires a [`bucket`](/influxdb/v1/flux/get-started/#buckets) parameter.
For this example, use `telegraf/autogen`, a combination of the default database and retention policy provided by the TICK stack.


@@ -60,7 +60,7 @@ r = {foo: "bar", baz: "quz"}
```
## Filter by fields and tags
The combination of [`from()`](/flux/v0/stdlib/universe/from),
The combination of [`from()`](/flux/v0/stdlib/influxdata/influxdb/from/),
[`range()`](/flux/v0/stdlib/universe/range),
and `filter()` represent the most basic Flux query:


@@ -99,7 +99,7 @@ The `ON` clause defines the database to query.
In InfluxDB OSS {{< current-version >}}, database and retention policy combinations are mapped to specific buckets
(for more information, see [Database and retention policy mapping](/influxdb/v2/reference/api/influxdb-1x/dbrp/)).
Use the [`from()` function](/flux/v0/stdlib/universe/from)
Use the [`from()` function](/flux/v0/stdlib/influxdata/influxdb/from/)
to specify the bucket to query:
###### InfluxQL


@@ -20,7 +20,7 @@ Upgrade to InfluxDB {{< current-version >}} from an earlier version of InfluxDB
{{% note %}}
#### InfluxDB 2.0 beta-16 or earlier
If you're upgrading from InfluxDB 2.0 beta-16 or earlier, you must first
[upgrade to InfluxDB 2.0](/influxdb/v2/upgrade/v2-beta-to-v2/),
[upgrade to InfluxDB 2.0](/influxdb/v2/install/upgrade/v2-beta-to-v2/),
and then complete the steps below.
{{% /note %}}


@@ -32,7 +32,7 @@ Telegraf input plugins. To view prebuilt dashboards:
## Import monitoring dashboards
Use the dashboards below to visualize and monitor key TICK stack metrics.
Download the dashboard file and import it into Chronograf.
For detailed instructions, see [Importing a dashboard](/chronograf/v1/administration/import-export-dashboards/#importing-a-dashboard).
For detailed instructions, see [Import a dashboard](/chronograf/v1/administration/import-export-dashboards/#import-a-dashboard).
- [Monitor InfluxDB OSS](#monitor-influxdb-oss)
- [Monitor InfluxDB Enterprise](#monitor-influxdb-enterprise)
@@ -44,7 +44,7 @@ Use the InfluxDB OSS Monitor dashboard to monitor InfluxDB OSS in Chronograf.
<a class="btn download" href="/downloads/influxdb-oss-monitor-dashboard.json" download target="\_blank">Download InfluxDB OSS Monitor dashboard</a>
The InfluxDB OSS Monitor dashboard uses data from the `_internal` database
_([not recommended for production](/platform/monitoring/influxdata-platform/internal-vs-external/#disable-the-internal-database-in-production-clusters))_
_([not recommended for production](/platform/monitoring/influxdata-platform/internal-vs-external/#disable-the-_internal-database-in-production-clusters))_
or collected by the [Telegraf `influxdb` input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb).
This dashboard contains the following cells:
@@ -63,7 +63,7 @@ Use the InfluxDB Enterprise Monitor dashboard to monitor InfluxDB Enterprise in
<a class="btn download" href="/downloads/influxdb-enterprise-monitor-dashboard.json" download target="\_blank">Download InfluxDB Enterprise Monitor dashboard</a>
The InfluxDB Enterprise Monitor dashboard uses data from the `_internal` database
_([not recommended for production](/platform/monitoring/influxdata-platform/internal-vs-external/#disable-the-internal-database-in-production-clusters))_
_([not recommended for production](/platform/monitoring/influxdata-platform/internal-vs-external/#disable-the-_internal-database-in-production-clusters))_
or collected by the [Telegraf `influxdb` input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb).
This dashboard contains the following cells: