chore(docs): Redesign docs CLI tools for creating and editing content, add content/create.md tutorial page for the How to creat… (#6506)

* chore(docs): Add content/create.md tutorial page: How to create your own documentation

chore(scripts): Add docs:create and docs:edit scripts for content creation and editing

Major improvements to docs:create UX for both Claude Code and external tool integration:

**New `docs` CLI command:**
- Add scripts/docs-cli.js - main CLI with subcommand routing
- Add bin field to package.json for `docs` command
- Usage: `docs create` and `docs edit` (cleaner than yarn commands)

**Smart piping detection:**
- Auto-detect when stdout is piped (!process.stdout.isTTY)
- When piping: automatically output prompt text (no flag needed)
- When interactive: output prompt file path
- --print-prompt flag now optional (auto-enabled when piping)
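The detection itself is a one-line TTY check. A minimal sketch of the approach, matching the `outputPromptForExternalUse()` helper added to `scripts/docs-create.js` in this change (the prompt-file path shown here is an assumption):

```js
// Sketch: print the prompt when stdout is piped, otherwise write it to a file and print the path.
// PROMPT_FILE is assumed; the script derives it from its .tmp directory.
import { writeFileSync } from 'fs';

const PROMPT_FILE = '.tmp/scaffold-prompt.txt';

function outputPrompt(prompt, printPrompt = false) {
  const isBeingPiped = !process.stdout.isTTY; // piped stdout is not a TTY
  if (printPrompt || isBeingPiped) {
    console.log(prompt); // downstream tool reads the prompt text
  } else {
    writeFileSync(PROMPT_FILE, prompt, 'utf8');
    console.log(PROMPT_FILE); // interactive use: print the file path instead
  }
}
```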

**Specify products:**
- Simplify link-following behavior: treat relative paths as local files and all HTTP/HTTPS URLs as external
- stdin input now requires the --products flag with product keys
- --products now accepts keys from products.yml (influxdb3_core, telegraf, etc.)

Examples:
  --products influxdb3_core
  --products influxdb3_core,influxdb3_enterprise
  --products telegraf
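Validation is a simple lookup against the keys defined in data/products.yml. A minimal sketch, assuming the file has already been loaded into a plain object keyed by product namespace (the real script also expands multi-version products):

```js
// Sketch: check --products values against keys loaded from data/products.yml.
// `products` is assumed to be an object such as { influxdb3_core: {...}, telegraf: {...} }.
function validateProductKeys(flagValue, products) {
  const requested = flagValue.split(',').map((key) => key.trim());
  const invalid = requested.filter((key) => !(key in products));
  if (invalid.length > 0) {
    throw new Error(
      `Invalid product keys: ${invalid.join(', ')}. Valid keys: ${Object.keys(products).join(', ')}`
    );
  }
  return requested;
}
```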

**Usage examples:**
  # Inside Claude Code - automatic execution
  docs create drafts/new-feature.md

  # Pipe to external AI - prompt auto-detected
  docs create FILE --products X | claude -p
  docs create FILE --products X | copilot -p

  # Pipe from stdin
  echo 'content' | docs create --products X | claude -p

Benefits:
- Cleaner syntax (no yarn --silent needed)
- No manual --print-prompt flag when piping
- Consistent with industry tools (git, npm, etc.)
- Backward compatible with yarn commands

WIP: docs:create usage examples

- Redesign of the docs CLI tools for creating and editing content
- Cleaner interface works better for piping output to agents and downstream utilities
- Updates README.md and other authoring docs

This repository includes a `docs` CLI tool for common documentation workflows:

```sh
npx docs create drafts/new-feature.md --products influxdb3_core

npx docs edit https://docs.influxdata.com/influxdb3/core/admin/

npx docs placeholders content/influxdb3/core/admin/upgrade.md

npx docs --help
```

**Run test cases:**

```sh
npx docs test
```

* Update content/create.md

* Update content/create.md

* Update content/create.md

* Update content/create.md

* Update scripts/templates/chatgpt-prompt.md

* Update DOCS-SHORTCODES.md
Jason Stirnaman 2025-11-03 11:18:15 -05:00 committed by GitHub
parent efd288fdb8
commit e8a48e3482
14 changed files with 1297 additions and 265 deletions


@ -19,7 +19,7 @@ Complete reference for custom Hugo shortcodes used in InfluxData documentation.
- [Content Management](#content-management)
- [Special Purpose](#special-purpose)
----
+***
## Notes and Warnings
@ -146,7 +146,7 @@ Use the `{{< api-endpoint >}}` shortcode to generate a code block that contains
- **method**: HTTP request method (get, post, patch, put, or delete)
- **endpoint**: API endpoint
- **api-ref**: Link the endpoint to a specific place in the API documentation
-- **influxdb_host**: Specify which InfluxDB product host to use _if the `endpoint` contains the `influxdb/host` shortcode_. Uses the current InfluxDB product as default. Supports the following product values:
+- **influxdb_host**: Specify which InfluxDB product host to use *if the `endpoint` contains the `influxdb/host` shortcode*. Uses the current InfluxDB product as default. Supports the following product values:
- oss
- cloud
- serverless
@ -268,11 +268,11 @@ To link to tabbed content, click on the tab and use the URL parameter shown. It
Use the `{{< page-nav >}}` shortcode to add page navigation buttons to a page. These are useful for guiding users through a set of docs that should be read in sequential order. The shortcode has the following parameters:
-- **prev:** path of the previous document _(optional)_
-- **next:** path of the next document _(optional)_
-- **prevText:** override the button text linking to the previous document _(optional)_
-- **nextText:** override the button text linking to the next document _(optional)_
-- **keepTab:** include the currently selected tab in the button link _(optional)_
+- **prev:** path of the previous document *(optional)*
+- **next:** path of the next document *(optional)*
+- **prevText:** override the button text linking to the previous document *(optional)*
+- **nextText:** override the button text linking to the next document *(optional)*
+- **keepTab:** include the currently selected tab in the button link *(optional)*
The shortcode generates buttons that link to both the previous and next documents. By default, the shortcode uses either the `list_title` or the `title` of the linked document, but you can use `prevText` and `nextText` to override button text.
@ -308,7 +308,7 @@ The children shortcode can also be used to list only "section" articles (those w
{{< children show="pages" >}}
```
-_By default, it displays both sections and pages._
+*By default, it displays both sections and pages.*
Use the `type` argument to specify the format of the children list.
@ -325,7 +325,7 @@ The following list types are available:
#### Include a "Read more" link #### Include a "Read more" link
To include a "Read more" link with each child summary, set `readmore=true`. _Only the `articles` list type supports "Read more" links._ To include a "Read more" link with each child summary, set `readmore=true`. *Only the `articles` list type supports "Read more" links.*
```md ```md
{{< children readmore=true >}} {{< children readmore=true >}}
@ -333,7 +333,7 @@ To include a "Read more" link with each child summary, set `readmore=true`. _Onl
#### Include a horizontal rule
-To include a horizontal rule after each child summary, set `hr=true`. _Only the `articles` list type supports horizontal rules._
+To include a horizontal rule after each child summary, set `hr=true`. *Only the `articles` list type supports horizontal rules.*
```md
{{< children hr=true >}}
@ -390,11 +390,11 @@ This is useful for maintaining and referencing sample code variants in their nat
#### Include specific files from the same directory
-> [!Caution]
+> \[!Caution]
> **Don't use for code examples**
> Using this and `get-shared-text` shortcodes to include code examples prevents the code from being tested.
-To include the text from one file in another file in the same directory, use the `{{< get-leaf-text >}}` shortcode. The directory that contains both files must be a Hugo [_Leaf Bundle_](https://gohugo.io/content-management/page-bundles/#leaf-bundles), a directory that doesn't have any child directories.
+To include the text from one file in another file in the same directory, use the `{{< get-leaf-text >}}` shortcode. The directory that contains both files must be a Hugo [*Leaf Bundle*](https://gohugo.io/content-management/page-bundles/#leaf-bundles), a directory that doesn't have any child directories.
In the following example, `api` is a leaf bundle. `content` isn't.
@ -695,7 +695,7 @@ Column 2
The following options are available:
-- half _(Default)_
+- half *(Default)*
- third
- quarter
@ -721,10 +721,10 @@ Click {{< caps >}}Add Data{{< /caps >}}
### Authentication token link
-Use the `{{% token-link "<descriptor>" "<link_append>%}}` shortcode to automatically generate links to token management documentation. The shortcode accepts two _optional_ arguments:
+Use the `{{% token-link "<descriptor>" "<link_append>%}}` shortcode to automatically generate links to token management documentation. The shortcode accepts two *optional* arguments:
- **descriptor**: An optional token descriptor
-- **link_append**: An optional path to append to the token management link path, `/<product>/<version>/admin/tokens/`.
+- **link\_append**: An optional path to append to the token management link path, `/<product>/<version>/admin/tokens/`.
```md
{{% token-link "database" "resource/" %}}
@ -775,7 +775,7 @@ Descriptions should follow consistent patterns:
- Recommended: "your {{% token-link "database" %}}"{{% show-in "enterprise" %}} with permissions on the specified database{{% /show-in %}} - Recommended: "your {{% token-link "database" %}}"{{% show-in "enterprise" %}} with permissions on the specified database{{% /show-in %}}
- Avoid: "your token", "the token", "an authorization token" - Avoid: "your token", "the token", "an authorization token"
3. **Database names**: 3. **Database names**:
- Recommended: "the name of the database to [action]" - Recommended: "the name of the database to \[action]"
- Avoid: "your database", "the database name" - Avoid: "your database", "the database name"
4. **Conditional content**: 4. **Conditional content**:
- Use `{{% show-in "enterprise" %}}` for content specific to enterprise versions - Use `{{% show-in "enterprise" %}}` for content specific to enterprise versions
@ -801,9 +801,71 @@ Descriptions should follow consistent patterns:
- `{{% code-placeholder-key %}}`: Use this shortcode to define a placeholder key
- `{{% /code-placeholder-key %}}`: Use this shortcode to close the key name
-_The `placeholders` attribute supercedes the deprecated `code-placeholders` shortcode._
+*The `placeholders` attribute supercedes the deprecated `code-placeholders` shortcode.*
-#### Example usage
+#### Automated placeholder syntax
Use the `docs placeholders` command to automatically add placeholder syntax to code blocks and descriptions:
```bash
# Process a file
npx docs placeholders content/influxdb3/core/admin/upgrade.md
# Preview changes without modifying the file
npx docs placeholders content/influxdb3/core/admin/upgrade.md --dry
# Get help
npx docs placeholders --help
```
**What it does:**
1. Detects UPPERCASE placeholders in code blocks
2. Adds `{ placeholders="..." }` attribute to code fences
3. Wraps placeholder descriptions with `{{% code-placeholder-key %}}` shortcodes
**Example transformation:**
Before:
````markdown
```bash
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- **`SYSTEM_DATABASE`**: The name of your system database
- **`ADMIN_TOKEN`**: An admin token with read permissions
````
After:
````markdown
```bash { placeholders="ADMIN_TOKEN|SYSTEM_DATABASE" }
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- {{% code-placeholder-key %}}`SYSTEM_DATABASE`{{% /code-placeholder-key %}}: The name of your system database
- {{% code-placeholder-key %}}`ADMIN_TOKEN`{{% /code-placeholder-key %}}: An admin token with read permissions
````
**How it works:**
- Pattern: Matches words with 2+ characters, all uppercase, can include underscores
- Excludes common words: HTTP verbs (GET, POST), protocols (HTTP, HTTPS), SQL keywords (SELECT, FROM), etc.
- Idempotent: Running multiple times won't duplicate syntax
- Preserves existing `placeholders` attributes and already-wrapped descriptions
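The detection pattern described above boils down to a short regular expression plus an exclusion list. A rough sketch (the exclusion list in `scripts/add-placeholders.js` is longer than what is shown here):

```js
// Sketch: find UPPERCASE placeholders in a code block, skipping common keywords.
// The exclusion entries below are illustrative, not the script's full list.
const EXCLUDED = new Set([
  'GET', 'POST', 'PUT', 'PATCH', 'DELETE', // HTTP verbs
  'HTTP', 'HTTPS',                         // protocols
  'SELECT', 'FROM', 'WHERE', 'LIMIT',      // SQL keywords
]);

function findPlaceholders(code) {
  const matches = code.match(/\b[A-Z][A-Z_]+\b/g) || []; // 2+ chars, uppercase, underscores allowed
  return [...new Set(matches)].filter((word) => !EXCLUDED.has(word)).sort();
}

// Example: returns ['ADMIN_TOKEN', 'SYSTEM_DATABASE']
findPlaceholders('influxdb3 query --database SYSTEM_DATABASE --token ADMIN_TOKEN');
```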
#### Manual placeholder usage
```sh { placeholders "DATABASE_NAME|USERNAME|PASSWORD_OR_TOKEN|API_TOKEN|exampleuser@influxdata.com" }
curl --request POST http://localhost:8086/write?db=DATABASE_NAME \
@ -839,7 +901,7 @@ Sample dataset to output. Use either `set` argument name or provide the set as t
#### includeNull
-Specify whether or not to include _null_ values in the dataset. Use either `includeNull` argument name or provide the boolean value as the second argument.
+Specify whether or not to include *null* values in the dataset. Use either `includeNull` argument name or provide the boolean value as the second argument.
#### includeRange
@ -1115,6 +1177,6 @@ The InfluxDB host placeholder that gets replaced by custom domains differs betwe
{{< influxdb/host "serverless" >}}
```
----
+***
**For working examples**: Test all shortcodes in [content/example.md](content/example.md)


@ -2,9 +2,9 @@
<img src="/static/img/influx-logo-cubo-dark.png" width="200">
</p>
-# InfluxDB 2.0 Documentation
+# InfluxData Product Documentation
-This repository contains the InfluxDB 2.x documentation published at [docs.influxdata.com](https://docs.influxdata.com).
+This repository contains the InfluxData product documentation for InfluxDB and related tooling published at [docs.influxdata.com](https://docs.influxdata.com).
## Contributing
@ -15,6 +15,26 @@ For information about contributing to the InfluxData documentation, see [Contrib
For information about testing the documentation, including code block testing, link validation, and style linting, see [Testing guide](DOCS-TESTING.md).
## Documentation Tools
This repository includes a `docs` CLI tool for common documentation workflows:
```sh
# Create new documentation from a draft
npx docs create drafts/new-feature.md --products influxdb3_core
# Edit existing documentation from a URL
npx docs edit https://docs.influxdata.com/influxdb3/core/admin/
# Add placeholder syntax to code blocks
npx docs placeholders content/influxdb3/core/admin/upgrade.md
# Get help
npx docs --help
```
The `docs` command is automatically configured when you run `yarn install`.
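The wiring for that is a `postinstall` hook in `package.json` that runs `scripts/setup-local-bin.js`. A plausible minimal sketch of such a setup script, assuming its only job is to expose `scripts/docs-cli.js` as `docs` on `node_modules/.bin` (the real script isn't shown in this commit, so treat the details as assumptions):

```js
// Sketch of a postinstall helper in the spirit of scripts/setup-local-bin.js.
// The actual script may differ; this only illustrates the symlink idea.
import { existsSync, mkdirSync, symlinkSync, chmodSync } from 'fs';
import { join } from 'path';

const repoRoot = process.cwd();
const cli = join(repoRoot, 'scripts', 'docs-cli.js');
const binDir = join(repoRoot, 'node_modules', '.bin');
const link = join(binDir, 'docs');

chmodSync(cli, 0o755);                         // make the CLI entry point executable
if (!existsSync(binDir)) mkdirSync(binDir, { recursive: true });
if (!existsSync(link)) symlinkSync(cli, link); // lets `npx docs` resolve locally
```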
## Documentation
Comprehensive reference documentation for contributors:
@ -27,6 +47,7 @@ Comprehensive reference documentation for contributors:
- **[API Documentation](api-docs/README.md)** - API reference generation
### Quick Links
- [Style guidelines](DOCS-CONTRIBUTING.md#style-guidelines)
- [Commit guidelines](DOCS-CONTRIBUTING.md#commit-guidelines)
- [Code block testing](DOCS-TESTING.md#code-block-testing)
@ -35,9 +56,9 @@ Comprehensive reference documentation for contributors:
InfluxData takes security and our users' trust very seriously.
If you believe you have found a security issue in any of our open source projects,
-please responsibly disclose it by contacting security@influxdata.com.
+please responsibly disclose it by contacting <security@influxdata.com>.
More details about security vulnerability reporting,
-including our GPG key, can be found at https://www.influxdata.com/how-to-report-security-vulnerabilities/.
+including our GPG key, can be found at <https://www.influxdata.com/how-to-report-security-vulnerabilities/>.
## Running the docs locally
@ -58,7 +79,13 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
yarn install
```
-_**Note:** The most recent version of Hugo tested with this documentation is **0.149.0**._
+***Note:** The most recent version of Hugo tested with this documentation is **0.149.0**.*
After installation, the `docs` command will be available via `npx`:
```sh
npx docs --help
```
3. To generate the API docs, see [api-docs/README.md](api-docs/README.md).
@ -71,6 +98,7 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
```sh
npx hugo server
```
5. View the docs at [localhost:1313](http://localhost:1313).
### Alternative: Use docker compose
@ -84,4 +112,5 @@ including our GPG key, can be found at https://www.influxdata.com/how-to-report-
```sh
docker compose up local-dev
```
4. View the docs at [localhost:1313](http://localhost:1313).

content/create.md (new file, 210 lines)

@ -0,0 +1,210 @@
---
title: Create and edit InfluxData docs
description: Learn how to create and edit InfluxData documentation.
tags: [documentation, guide, influxdata]
test_only: true
---
Learn how to create and edit InfluxData documentation.
- [Submit an issue to request new or updated documentation](#submit-an-issue-to-request-new-or-updated-documentation)
- [Edit an existing page in your browser](#edit-an-existing-page-in-your-browser)
- [Create and edit locally with the docs-v2 repository](#create-and-edit-locally-with-the-docs-v2-repository)
- [Other resources](#other-resources)
## Submit an issue to request new or updated documentation
- **Public**: <https://github.com/influxdata/docs-v2/issues/>
- **Private**: <https://github.com/influxdata/DAR/issues/>
## Edit an existing page in your browser
**Example**: Editing a product-specific page
1. Visit the public docs at <https://docs.influxdata.com>.
2. Search, use Ask AI, or navigate to find the page to edit. For example, <https://docs.influxdata.com/influxdb3/cloud-serverless/get-started/>.
3. Click the "Edit this page" link at the bottom of the page.
   This opens the GitHub repository to the file that generates the page.
4. Click the pencil icon to edit the file in your browser.
5. [Commit and create a pull request](#commit-and-create-a-pull-request).
## Create and edit locally with the docs-v2 repository
Use `docs` scripts with AI agents to help you create and edit documentation locally, especially when working with shared content for multiple products.
**Prerequisites**:
1. [Clone or fork the docs-v2 repository](https://github.com/influxdata/docs-v2/):
```bash
git clone https://github.com/influxdata/docs-v2.git
cd docs-v2
```
2. [Install Yarn](https://yarnpkg.com/getting-started/install)
3. Run `yarn` in the repository root to install dependencies
4. Optional: [Set up GitHub CLI](https://cli.github.com/manual/)
> [!Tip]
> To run and test your changes locally, enter the following command in your terminal:
>
> ```bash
> yarn hugo server
> ```
>
> *To refresh shared content after making changes, `touch` or edit the frontmatter file, or stop the server (Ctrl+C) and restart it.*
>
> To list all available scripts, run:
>
> ```bash
> yarn run
> ```
### Edit an existing page locally
Use the `npx docs edit` command to open an existing page in your editor.
```bash
npx docs edit https://docs.influxdata.com/influxdb3/enterprise/get-started/
```
### Create content locally
Use the `npx docs create` command with your AI agent tool to scaffold frontmatter and generate new content.
- The `npx docs create` command accepts draft input from stdin or from a file path and generates a prompt file from the draft and your product selections
- The prompt file makes AI agents aware of InfluxData docs guidelines, shared content, and product-specific requirements
- `npx docs create` is designed to work automatically with `claude`, but you can
use the generated prompt file with any AI agent (for example, `copilot` or `codex`)
> [!Tip]
>
> `docs-v2` contains custom configuration for agents like Claude and Copilot Agent mode.
<!-- Coming soon: generate content from an issue with labels -->
#### Generate content and frontmatter from a draft
{{< tabs-wrapper >}}
{{% tabs %}}
[Interactive (Claude Code)](#)
[Non-interactive (any agent)](#)
{{% /tabs %}}
{{% tab-content %}}
1. Open a Claude Code prompt:
```bash
claude code
```
2. In the prompt, run the `docs create` command with the path to your draft file.
Optionally, include the `--products` flag and product namespaces to preselect products. For example:
```bash
npx docs create .context/drafts/"Upgrading Enterprise 3 (draft).md" \
--products influxdb3_enterprise,influxdb3_core
```
If you don't include the `--products` flag, you'll be prompted to select products after running the command.
The script first generates a prompt file, then the agent automatically uses it to generate content and frontmatter based on the draft and the products you select.
{{% /tab-content %}}
{{% tab-content %}}
Use `npx docs create` to generate a prompt file and then pipe it to your preferred AI agent.
Include the `--products` flag and product namespaces to preselect products.
The following example uses Copilot to process a draft file:
```bash
npx docs create .context/drafts/"Upgrading Enterprise 3 (draft).md" \
--products "influxdb3_enterprise,influxdb3_core" | \
copilot --prompt --allow-all-tools
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Review, commit, and create a pull request
After you create or edit content, test and review your changes, and then create a pull request.
> [!Important]
>
> #### Check AI-generated content
>
> Always review and validate AI-generated content for accuracy.
> Make sure example commands are correct for the version you're documenting.
### Test and review your changes
Run a local Hugo server to preview your changes:
```bash
yarn hugo server
```
Visit <http://localhost:1313> to review your changes in the browser.
> [!Note]
> If you need to preview changes in a live production-like environment
> that you can also share with others,
> the Docs team can deploy your branch to the staging site.
### Commit and create a pull request
1. Commit your changes to a new branch
2. Fix any issues found by automated checks
3. Push the branch to your fork or to the docs-v2 repository
```bash
git add content
git commit -m "feat(product): Your commit message"
git push origin your-branch-name
```
### Create a pull request
1. Create a pull request against the `master` branch of the docs-v2 repository
2. Add reviewers:
- `@influxdata/docs-team`
- team members familiar with the product area
- Optionally, assign Copilot to review
3. After approval and automated checks are successful, merge the pull request (if you have permissions) or wait for the docs team to merge it.
{{< tabs-wrapper >}}
{{% tabs %}}
[GitHub](#)
[gh CLI](#)
{{% /tabs %}}
{{% tab-content %}}
1. Visit [influxdata/docs-v2 pull requests on GitHub](https://github.com/influxdata/docs-v2/pulls)
2. Optional: edit PR title and description
3. Optional: set to draft if it needs more work
4. When ready for review, assign `@influxdata/docs-team` and other reviewers
{{% /tab-content %}}
{{% tab-content %}}
```bash
gh pr create \
--base master \
--head your-branch-name \
--title "Your PR title" \
--body "Your PR description" \
--reviewer influxdata/docs-team,<other-reviewers>
```
{{% /tab-content %}}
{{< /tabs-wrapper >}}
## Other resources
- `DOCS-*.md`: Documentation standards and guidelines
- <http://localhost:1313/example/>: View shortcode examples
- <https://app.kapa.ai>: Review content gaps identified from Ask AI answers


@ -212,19 +212,6 @@ influxdb_cloud:
- How is Cloud 2 different from Cloud Serverless?
- How do I manage auth tokens in InfluxDB Cloud 2?
explorer:
name: InfluxDB 3 Explorer
namespace: explorer
menu_category: other
list_order: 4
versions: [v1]
latest: explorer
latest_patch: 1.1.0
ai_sample_questions:
- How do I use InfluxDB 3 Explorer to visualize data?
- How do I create a dashboard in InfluxDB 3 Explorer?
- How do I query data using InfluxDB 3 Explorer?
telegraf:
name: Telegraf
namespace: telegraf


@ -4,6 +4,9 @@
"version": "1.0.0", "version": "1.0.0",
"description": "InfluxDB documentation", "description": "InfluxDB documentation",
"license": "MIT", "license": "MIT",
"bin": {
"docs": "scripts/docs-cli.js"
},
"resolutions": { "resolutions": {
"serialize-javascript": "^6.0.2" "serialize-javascript": "^6.0.2"
}, },
@ -40,6 +43,7 @@
"vanillajs-datepicker": "^1.3.4" "vanillajs-datepicker": "^1.3.4"
}, },
"scripts": { "scripts": {
"postinstall": "node scripts/setup-local-bin.js",
"docs:create": "node scripts/docs-create.js", "docs:create": "node scripts/docs-create.js",
"docs:edit": "node scripts/docs-edit.js", "docs:edit": "node scripts/docs-edit.js",
"docs:add-placeholders": "node scripts/add-placeholders.js", "docs:add-placeholders": "node scripts/add-placeholders.js",
@ -78,5 +82,8 @@
"test": "test" "test": "test"
}, },
"keywords": [], "keywords": [],
"author": "" "author": "",
"optionalDependencies": {
"copilot": "^0.0.2"
}
}


@ -1,108 +0,0 @@
# Add Placeholders Script
Automatically adds placeholder syntax to code blocks and placeholder descriptions in markdown files.
## What it does
This script finds UPPERCASE placeholders in code blocks and:
1. **Adds `{ placeholders="PATTERN1|PATTERN2" }` attribute** to code block fences
2. **Wraps placeholder descriptions** with `{{% code-placeholder-key %}}` shortcodes
## Usage
### Direct usage
```bash
# Process a single file
node scripts/add-placeholders.js <file.md>
# Dry run to preview changes
node scripts/add-placeholders.js <file.md> --dry
# Example
node scripts/add-placeholders.js content/influxdb3/enterprise/admin/upgrade.md
```
### Using npm script
```bash
# Process a file
yarn docs:add-placeholders <file.md>
# Dry run
yarn docs:add-placeholders <file.md> --dry
```
## Example transformations
### Before
````markdown
```bash
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- **`SYSTEM_DATABASE`**: The name of your system database
- **`ADMIN_TOKEN`**: An admin token with read permissions
````
### After
````markdown
```bash { placeholders="ADMIN_TOKEN|SYSTEM_DATABASE" }
influxdb3 query \
--database SYSTEM_DATABASE \
--token ADMIN_TOKEN \
"SELECT * FROM system.version"
```
Replace the following:
- {{% code-placeholder-key %}}`SYSTEM_DATABASE`{{% /code-placeholder-key %}}: The name of your system database
- {{% code-placeholder-key %}}`ADMIN_TOKEN`{{% /code-placeholder-key %}}: An admin token with read permissions
````
## How it works
### Placeholder detection
The script automatically detects UPPERCASE placeholders in code blocks using these rules:
- **Pattern**: Matches words with 2+ characters, all uppercase, can include underscores
- **Excludes common words**: HTTP verbs (GET, POST), protocols (HTTP, HTTPS), SQL keywords (SELECT, FROM), etc.
### Code block processing
1. Finds all code blocks (including indented ones)
2. Extracts UPPERCASE placeholders
3. Adds `{ placeholders="..." }` attribute to the fence line
4. Preserves indentation and language identifiers
### Description wrapping
1. Detects "Replace the following:" sections
2. Wraps placeholder descriptions matching `- **`PLACEHOLDER`**: description`
3. Preserves indentation and formatting
4. Skips already-wrapped descriptions
## Options
- `--dry` or `-d`: Preview changes without modifying files
## Notes
- The script is idempotent - running it multiple times on the same file won't duplicate syntax
- Preserves existing `placeholders` attributes in code blocks
- Works with both indented and non-indented code blocks
- Handles multiple "Replace the following:" sections in a single file
## Related documentation
- [DOCS-SHORTCODES.md](../DOCS-SHORTCODES.md) - Complete shortcode reference
- [DOCS-CONTRIBUTING.md](../DOCS-CONTRIBUTING.md) - Placeholder conventions and style guidelines


@ -16,7 +16,7 @@ import { readFileSync, writeFileSync } from 'fs';
import { parseArgs } from 'node:util';
// Parse command-line arguments
-const { positionals } = parseArgs({
+const { positionals, values } = parseArgs({
allowPositionals: true,
options: {
dry: {
@ -24,19 +24,47 @@ const { positionals } = parseArgs({
short: 'd',
default: false,
},
help: {
type: 'boolean',
short: 'h',
default: false,
},
},
});
// Show help if requested
if (values.help) {
console.log(`
Add placeholder syntax to code blocks
Usage:
docs placeholders <file.md> [options]
Options:
--dry, -d Preview changes without modifying files
--help, -h Show this help message
Examples:
docs placeholders content/influxdb3/enterprise/admin/upgrade.md
docs placeholders content/influxdb3/core/admin/databases/create.md --dry
What it does:
1. Finds UPPERCASE placeholders in code blocks
2. Adds { placeholders="PATTERN1|PATTERN2" } attribute to code fences
3. Wraps placeholder descriptions with {{% code-placeholder-key %}} shortcodes
`);
process.exit(0);
}
if (positionals.length === 0) {
-console.error('Usage: node scripts/add-placeholders.js <file.md> [--dry]');
+console.error('Error: Missing file path argument');
-console.error(
+console.error('Usage: docs placeholders <file.md> [--dry]');
-'Example: node scripts/add-placeholders.js content/influxdb3/enterprise/admin/upgrade.md'
+console.error('Run "docs placeholders --help" for more information');
);
process.exit(1);
}
const filePath = positionals[0];
-const isDryRun = process.argv.includes('--dry') || process.argv.includes('-d');
+const isDryRun = values.dry;
/**
 * Extract UPPERCASE placeholders from a code block

scripts/docs-cli.js (new executable file, 236 lines)

@ -0,0 +1,236 @@
#!/usr/bin/env node
/**
* Main CLI entry point for docs tools
* Supports subcommands: create, edit, placeholders
*
* Usage:
* docs create <draft-path> [options]
* docs edit <url> [options]
* docs placeholders <file.md> [options]
*/
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { spawn } from 'child_process';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Get subcommand and remaining arguments
const subcommand = process.argv[2];
const args = process.argv.slice(3);
// Map subcommands to script files
const subcommands = {
create: 'docs-create.js',
edit: 'docs-edit.js',
placeholders: 'add-placeholders.js',
};
/**
* Print usage information
*/
function printUsage() {
console.log(`
Usage: docs <command> [options]
Commands:
create <draft-path> Create new documentation from draft
edit <url> Edit existing documentation
placeholders <file.md> Add placeholder syntax to code blocks
test Run test suite to verify CLI functionality
Examples:
docs create drafts/new-feature.md --products influxdb3_core
docs edit https://docs.influxdata.com/influxdb3/core/admin/
docs placeholders content/influxdb3/core/admin/upgrade.md
docs test
For command-specific help:
docs create --help
docs edit --help
docs placeholders --help
`);
}
// Handle test command (async, so don't continue)
if (subcommand === 'test') {
runTests();
} else if (!subcommand || subcommand === '--help' || subcommand === '-h') {
// Handle no subcommand or help
printUsage();
process.exit(subcommand ? 0 : 1);
} else if (!subcommands[subcommand]) {
// Validate subcommand
console.error(`Error: Unknown command '${subcommand}'`);
console.error(`Run 'docs --help' for usage information`);
process.exit(1);
} else {
// Execute the appropriate script
const scriptPath = join(__dirname, subcommands[subcommand]);
const child = spawn('node', [scriptPath, ...args], {
stdio: 'inherit',
env: process.env,
});
child.on('exit', (code) => {
process.exit(code || 0);
});
child.on('error', (err) => {
console.error(`Failed to execute ${subcommand}:`, err.message);
process.exit(1);
});
}
/**
* Test function to verify docs CLI functionality
* Run with: npx docs test
*/
function runTests() {
import('child_process').then(({ execSync }) => {
const tests = [];
const testResults = [];
console.log('\n🧪 Testing docs CLI functionality...\n');
// Test 1: docs --help
tests.push({
name: 'docs --help',
command: 'npx docs --help',
expectedInOutput: [
'create <draft-path>',
'edit <url>',
'placeholders <file.md>',
],
});
// Test 2: docs create --help
tests.push({
name: 'docs create --help',
command: 'npx docs create --help',
expectedInOutput: [
'Documentation Content Scaffolding',
'--products',
'Pipe to external agent',
],
});
// Test 3: docs edit --help
tests.push({
name: 'docs edit --help',
command: 'npx docs edit --help',
expectedInOutput: ['Documentation File Opener', '--list'],
});
// Test 4: docs placeholders --help
tests.push({
name: 'docs placeholders --help',
command: 'npx docs placeholders --help',
expectedInOutput: [
'Add placeholder syntax',
'--dry',
'code-placeholder-key',
],
});
// Test 5: docs placeholders with missing args shows error
tests.push({
name: 'docs placeholders (no args)',
command: 'npx docs placeholders 2>&1',
expectedInOutput: ['Error: Missing file path'],
expectFailure: true,
});
// Test 6: Verify symlink exists
tests.push({
name: 'symlink exists',
command: 'ls -la node_modules/.bin/docs',
expectedInOutput: ['scripts/docs-cli.js'],
});
// Test 7: Unknown command shows error
tests.push({
name: 'unknown command',
command: 'npx docs invalid-command 2>&1',
expectedInOutput: ['Error: Unknown command'],
expectFailure: true,
});
// Run tests
for (const test of tests) {
try {
const output = execSync(test.command, {
encoding: 'utf8',
stdio: 'pipe',
});
const passed = test.expectedInOutput.every((expected) =>
output.includes(expected)
);
if (passed) {
console.log(`${test.name}`);
testResults.push({ name: test.name, passed: true });
} else {
console.log(`${test.name} - Expected output not found`);
console.log(` Expected: ${test.expectedInOutput.join(', ')}`);
testResults.push({
name: test.name,
passed: false,
reason: 'Expected output not found',
});
}
} catch (error) {
if (test.expectFailure) {
// Expected to fail - check if error output contains expected strings
const errorOutput =
error.stderr?.toString() || error.stdout?.toString() || '';
const passed = test.expectedInOutput.every((expected) =>
errorOutput.includes(expected)
);
if (passed) {
console.log(`${test.name} (expected failure)`);
testResults.push({ name: test.name, passed: true });
} else {
console.log(`${test.name} - Expected error message not found`);
console.log(` Expected: ${test.expectedInOutput.join(', ')}`);
testResults.push({
name: test.name,
passed: false,
reason: 'Expected error message not found',
});
}
} else {
console.log(`${test.name} - Command failed unexpectedly`);
console.log(` Error: ${error.message}`);
testResults.push({
name: test.name,
passed: false,
reason: error.message,
});
}
}
}
const passed = testResults.filter((r) => r.passed).length;
const failed = testResults.filter((r) => !r.passed).length;
console.log(`\n📊 Test Results: ${passed}/${tests.length} passed`);
if (failed > 0) {
console.log(`\n❌ Failed tests:`);
testResults
.filter((r) => !r.passed)
.forEach((r) => {
console.log(` - ${r.name}: ${r.reason}`);
});
process.exit(1);
} else {
console.log(`\n✅ All tests passed!\n`);
process.exit(0);
}
});
}


@ -23,7 +23,12 @@ import {
loadProducts,
analyzeStructure,
} from './lib/content-scaffolding.js';
-import { writeJson, readJson, fileExists } from './lib/file-operations.js';
+import {
writeJson,
readJson,
fileExists,
readDraft,
} from './lib/file-operations.js';
import { parseMultipleURLs } from './lib/url-parser.js';
const __filename = fileURLToPath(import.meta.url);
@ -36,6 +41,7 @@ const REPO_ROOT = join(__dirname, '..');
const TMP_DIR = join(REPO_ROOT, '.tmp');
const CONTEXT_FILE = join(TMP_DIR, 'scaffold-context.json');
const PROPOSAL_FILE = join(TMP_DIR, 'scaffold-proposal.yml');
const PROMPT_FILE = join(TMP_DIR, 'scaffold-prompt.txt');
// Colors for console output
const colors = {
@ -49,25 +55,53 @@ const colors = {
};
/**
- * Print colored output
+ * Print colored output to stderr (so it doesn't interfere with piped output)
 */
function log(message, color = 'reset') {
-console.log(`${colors[color]}${message}${colors.reset}`);
+// Write to stderr so logs don't interfere with stdout (prompt path/text)
console.error(`${colors[color]}${message}${colors.reset}`);
}
/**
* Check if running in Claude Code environment
* @returns {boolean} True if Task function is available (Claude Code)
*/
function isClaudeCode() {
return typeof Task !== 'undefined';
}
/**
* Output prompt for use with external tools
* @param {string} prompt - The generated prompt text
* @param {boolean} printPrompt - If true, force print to stdout
*/
function outputPromptForExternalUse(prompt, printPrompt = false) {
// Auto-detect if stdout is being piped
const isBeingPiped = !process.stdout.isTTY;
// Print prompt text if explicitly requested OR if being piped
const shouldPrintText = printPrompt || isBeingPiped;
if (shouldPrintText) {
// Output prompt text to stdout
console.log(prompt);
} else {
// Write prompt to file and output file path
writeFileSync(PROMPT_FILE, prompt, 'utf8');
console.log(PROMPT_FILE);
}
process.exit(0);
} }
/**
 * Prompt user for input (works in TTY and non-TTY environments)
 */
async function promptUser(question) {
// For non-TTY environments, return empty string
if (!process.stdin.isTTY) {
return '';
}
const readline = await import('readline');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: process.stdin.isTTY !== undefined ? process.stdin.isTTY : true,
});
return new Promise((resolve) => {
@ -91,30 +125,28 @@ function divider() {
function parseArguments() {
const { values, positionals } = parseArgs({
options: {
-draft: { type: 'string' },
+'from-draft': { type: 'string' },
from: { type: 'string' },
url: { type: 'string', multiple: true },
urls: { type: 'string' },
products: { type: 'string' },
ai: { type: 'string', default: 'claude' },
execute: { type: 'boolean', default: false },
'context-only': { type: 'boolean', default: false },
'print-prompt': { type: 'boolean', default: false },
proposal: { type: 'string' },
'dry-run': { type: 'boolean', default: false },
yes: { type: 'boolean', default: false },
help: { type: 'boolean', default: false },
'follow-external': { type: 'boolean', default: false },
},
allowPositionals: true,
});
// First positional argument is treated as draft path
-if (positionals.length > 0 && !values.draft && !values.from) {
+if (positionals.length > 0 && !values['from-draft']) {
values.draft = positionals[0];
-}
+} else if (values['from-draft']) {
values.draft = values['from-draft'];
// --from is an alias for --draft
if (values.from && !values.draft) {
values.draft = values.from;
} }
// Normalize URLs into array
@ -141,63 +173,101 @@ function printUsage() {
${colors.bright}Documentation Content Scaffolding${colors.reset}
${colors.bright}Usage:${colors.reset}
-yarn docs:create <draft-path> Create from draft
-yarn docs:create --url <url> --draft <path> Create at URL with draft content
+docs create <draft-path> Create from draft
+docs create --url <url> --from-draft <path> Create at URL with draft
# Or use with yarn:
yarn docs:create <draft-path>
yarn docs:create --url <url> --from-draft <path>
${colors.bright}Options:${colors.reset}
<draft-path> Path to draft markdown file (positional argument)
---draft <path> Path to draft markdown file
+--from-draft <path> Path to draft markdown file
--from <path> Alias for --draft
--url <url> Documentation URL for new content location
--products <list> Comma-separated product keys (required for stdin)
Examples: influxdb3_core, influxdb3_enterprise
--follow-external Include external (non-docs.influxdata.com) URLs
when extracting links from draft. Without this flag,
only local documentation links are followed.
--context-only Stop after context preparation
(for non-Claude tools)
--print-prompt Force prompt text output (auto-enabled when piping)
--proposal <path> Import and execute proposal from JSON file
--dry-run Show what would be created without creating
--yes Skip confirmation prompt
--help Show this help message
-${colors.bright}Workflow (Create from draft):${colors.reset}
+${colors.bright}Stdin Support:${colors.reset}
When piping content from stdin, you must specify target products:
cat draft.md | docs create --products influxdb3_core
echo "# Content" | docs create --products influxdb3_core,influxdb3_enterprise
${colors.bright}Link Following:${colors.reset}
By default, the script extracts links from your draft and prompts you
to select which ones to include as context. This helps the AI:
- Maintain consistent terminology
- Avoid duplicating content
- Add appropriate \`related\` frontmatter links
Local documentation links are always available for selection.
Use --follow-external to also include external URLs (GitHub, etc.)
${colors.bright}Workflow (Inside Claude Code):${colors.reset}
1. Create a draft markdown file with your content
-2. Run: yarn docs:create drafts/new-feature.md
+2. Run: docs create drafts/new-feature.md
3. Script runs all agents automatically
4. Review and confirm to create files
-${colors.bright}Workflow (Create at specific URL):${colors.reset}
+${colors.bright}Workflow (Pipe to external agent):${colors.reset}
1. Create draft: vim drafts/new-feature.md
-2. Run: yarn docs:create \\
---url https://docs.influxdata.com/influxdb3/core/admin/new-feature/ \\
---draft drafts/new-feature.md
-3. Script determines structure from URL and uses draft content
+2. Pipe to your AI tool (prompt auto-detected):
+docs create drafts/new-feature.md --products X | claude -p
+docs create drafts/new-feature.md --products X | copilot -p
+3. AI generates files based on prompt
4. Review and confirm to create files
${colors.bright}Workflow (Manual - for non-Claude tools):${colors.reset}
1. Prepare context:
yarn docs:create --context-only drafts/new-feature.md
2. Run your AI tool with templates from scripts/templates/
3. Save proposal to .tmp/scaffold-proposal.json
4. Execute:
yarn docs:create --proposal .tmp/scaffold-proposal.json
${colors.bright}Examples:${colors.reset}
-# Create from draft (AI determines location)
+# Inside Claude Code - automatic execution
docs create drafts/new-feature.md
# Pipe to external AI tools - prompt auto-detected
docs create drafts/new-feature.md --products influxdb3_core | claude -p
docs create drafts/new-feature.md --products influxdb3_core | copilot -p
# Pipe from stdin
cat drafts/quick-note.md | docs create --products influxdb3_core | claude -p
echo "# Quick note" | docs create --products influxdb3_core | copilot -p
# Get prompt file path (when not piping)
docs create drafts/new-feature.md # Outputs: .tmp/scaffold-prompt.txt
# Still works with yarn
yarn docs:create drafts/new-feature.md
-# Create at specific URL with draft content
-yarn docs:create --url /influxdb3/core/admin/new-feature/ \\
+# Include external links for context selection
+docs create --follow-external drafts/api-guide.md
--draft drafts/new-feature.md
-# Preview changes
-yarn docs:create --draft drafts/new-feature.md --dry-run
+${colors.bright}Smart Behavior:${colors.reset}
+INSIDE Claude Code:
Automatically runs Task() agent to generate files
PIPING to another tool:
Auto-detects piping and outputs prompt text
No --print-prompt flag needed
INTERACTIVE (not piping):
Outputs prompt file path: .tmp/scaffold-prompt.txt
Use with: code .tmp/scaffold-prompt.txt
${colors.bright}Note:${colors.reset}
-To edit existing pages, use: yarn docs:edit <url>
+To edit existing pages, use: docs edit <url>
`);
}
/**
 * Phase 1a: Prepare context from URLs
 */
-async function prepareURLPhase(urls, draftPath, options) {
+async function prepareURLPhase(urls, draftPath, options, stdinContent = null) {
log('\n🔍 Analyzing URLs and finding files...', 'bright');
try {
@ -258,9 +328,18 @@ async function prepareURLPhase(urls, draftPath, options) {
// Build context (include URL analysis)
let context = null;
-if (draftPath) {
+let draft;
if (stdinContent) {
// Use stdin content
draft = stdinContent;
log('✓ Using draft from stdin', 'green');
context = prepareContext(draft);
} else if (draftPath) {
// Use draft content if provided
-context = prepareContext(draftPath);
+draft = readDraft(draftPath);
draft.path = draftPath;
context = prepareContext(draft);
} else {
// Minimal context for editing existing pages
const products = loadProducts();
@ -351,18 +430,83 @@ async function prepareURLPhase(urls, draftPath, options) {
/**
 * Phase 1b: Prepare context from draft
 */
-async function preparePhase(draftPath, options) {
+async function preparePhase(draftPath, options, stdinContent = null) {
log('\n🔍 Analyzing draft and repository structure...', 'bright');
let draft;
// Handle stdin vs file
if (stdinContent) {
draft = stdinContent;
log('✓ Using draft from stdin', 'green');
} else {
// Validate draft exists
if (!fileExists(draftPath)) {
log(`✗ Draft file not found: ${draftPath}`, 'red');
process.exit(1);
}
draft = readDraft(draftPath);
draft.path = draftPath;
}
try {
// Prepare context
-const context = prepareContext(draftPath);
+const context = prepareContext(draft);
// Extract links from draft
const { extractLinks, followLocalLinks, fetchExternalLinks } = await import(
'./lib/content-scaffolding.js'
);
const links = extractLinks(draft.content);
if (links.localFiles.length > 0 || links.external.length > 0) {
// Filter external links if flag not set
if (!options['follow-external']) {
links.external = [];
}
// Let user select which external links to follow
// (local files are automatically included)
const selected = await selectLinksToFollow(links);
// Follow selected links
const linkedContent = [];
if (selected.selectedLocal.length > 0) {
log('\n📄 Loading local files...', 'cyan');
// Determine base path for resolving relative links
const basePath = draft.path
? dirname(join(REPO_ROOT, draft.path))
: REPO_ROOT;
const localResults = followLocalLinks(selected.selectedLocal, basePath);
linkedContent.push(...localResults);
const successCount = localResults.filter((r) => !r.error).length;
log(`✓ Loaded ${successCount} local file(s)`, 'green');
}
if (selected.selectedExternal.length > 0) {
log('\n🌐 Fetching external URLs...', 'cyan');
const externalResults = await fetchExternalLinks(
selected.selectedExternal
);
linkedContent.push(...externalResults);
const successCount = externalResults.filter((r) => !r.error).length;
log(`✓ Fetched ${successCount} external page(s)`, 'green');
}
// Add to context
if (linkedContent.length > 0) {
context.linkedContent = linkedContent;
// Show any errors
const errors = linkedContent.filter((lc) => lc.error);
if (errors.length > 0) {
log('\n⚠ Some links could not be loaded:', 'yellow');
errors.forEach((e) => log(`${e.url}: ${e.error}`, 'yellow'));
}
}
}
// Write context to temp file
writeJson(CONTEXT_FILE, context);
@ -382,6 +526,12 @@ async function preparePhase(draftPath, options) {
`✓ Found ${context.structure.existingPaths.length} existing pages`,
'green'
);
if (context.linkedContent) {
log(
`✓ Included ${context.linkedContent.length} linked page(s) as context`,
'green'
);
}
log(
`✓ Prepared context → ${CONTEXT_FILE.replace(REPO_ROOT, '.')}`,
'green'
@ -441,25 +591,69 @@ async function selectProducts(context, options) {
}
}
// Sort products: detected first, then alphabetically within each group
allProducts.sort((a, b) => {
const aDetected = detected.includes(a);
const bDetected = detected.includes(b);
// Detected products first
if (aDetected && !bDetected) return -1;
if (!aDetected && bDetected) return 1;
// Then alphabetically
return a.localeCompare(b);
});
// Case 1: Explicit flag provided
if (options.products) {
-const requested = options.products.split(',').map((p) => p.trim());
+const requestedKeys = options.products.split(',').map((p) => p.trim());
const invalid = requested.filter((p) => !allProducts.includes(p));
-if (invalid.length > 0) {
+// Map product keys to display names
const requestedNames = [];
const invalidKeys = [];
for (const key of requestedKeys) {
const product = context.products[key];
if (product) {
// Valid product key found
if (product.versions && product.versions.length > 1) {
// Multi-version product: add all versions
product.versions.forEach((version) => {
const displayName = `${product.name} ${version}`;
if (allProducts.includes(displayName)) {
requestedNames.push(displayName);
}
});
} else {
// Single version product
if (allProducts.includes(product.name)) {
requestedNames.push(product.name);
}
}
} else if (allProducts.includes(key)) {
// It's already a display name (backwards compatibility)
requestedNames.push(key);
} else {
invalidKeys.push(key);
}
}
if (invalidKeys.length > 0) {
const validKeys = Object.keys(context.products).join(', ');
log(
-`\n✗ Invalid products: ${invalid.join(', ')}\n` +
-`Valid products: ${allProducts.join(', ')}`,
+`\n✗ Invalid product keys: ${invalidKeys.join(', ')}\n` +
+`Valid keys: ${validKeys}`,
'red'
);
process.exit(1);
}
log(
-`✓ Using products from --products flag: ${requested.join(', ')}`,
+`✓ Using products from --products flag: ${requestedNames.join(', ')}`,
'green'
);
-return requested;
+return requestedNames;
}
// Case 2: Unambiguous (single product detected)
@ -514,6 +708,74 @@ async function selectProducts(context, options) {
return selected;
}
/**
* Prompt user to select which external links to include
* Local file paths are automatically followed
* @param {object} links - {localFiles, external} from extractLinks
* @returns {Promise<object>} {selectedLocal, selectedExternal}
*/
async function selectLinksToFollow(links) {
// Local files are followed automatically (no user prompt)
// External links require user selection
if (links.external.length === 0) {
return {
selectedLocal: links.localFiles || [],
selectedExternal: [],
};
}
log('\n🔗 Found external links in draft:\n', 'bright');
const allLinks = [];
let index = 1;
// Show external links for selection
links.external.forEach((link) => {
log(` ${index}. ${link}`, 'yellow');
allLinks.push({ type: 'external', url: link });
index++;
});
const answer = await promptUser(
'\nSelect external links to include as context ' +
'(comma-separated numbers, or "all"): '
);
if (!answer || answer.toLowerCase() === 'none') {
return {
selectedLocal: links.localFiles || [],
selectedExternal: [],
};
}
let selectedIndices;
if (answer.toLowerCase() === 'all') {
selectedIndices = Array.from({ length: allLinks.length }, (_, i) => i);
} else {
selectedIndices = answer
.split(',')
.map((s) => parseInt(s.trim()) - 1)
.filter((i) => i >= 0 && i < allLinks.length);
}
const selectedExternal = [];
selectedIndices.forEach((i) => {
const link = allLinks[i];
selectedExternal.push(link.url);
});
log(
`\n✓ Following ${links.localFiles?.length || 0} local file(s) ` +
`and ${selectedExternal.length} external link(s)`,
'green'
);
return {
selectedLocal: links.localFiles || [],
selectedExternal,
};
}
/**
 * Run single content generator agent with direct file generation (Claude Code)
 */
@ -577,6 +839,30 @@ function generateClaudePrompt(
**Target Products**: Use \`context.selectedProducts\` field (${selectedProducts.join(', ')})
**Mode**: ${mode === 'edit' ? 'Edit existing content' : 'Create new documentation'}
${isURLBased ? `**URLs**: ${context.urls.map((u) => u.url).join(', ')}` : ''}
${
context.linkedContent?.length > 0
? `
**Linked References**: The draft references ${context.linkedContent.length} page(s) from existing documentation.
These are provided for context to help you:
- Maintain consistent terminology and style
- Avoid duplicating existing content
- Understand related concepts and their structure
- Add appropriate links to the \`related\` frontmatter field
Linked content details available in \`context.linkedContent\`:
${context.linkedContent
.map((lc) =>
lc.error
? `- ❌ ${lc.url} (${lc.error})`
: `- ✓ [${lc.type}] ${lc.title} (${lc.path || lc.url})`
)
.join('\n')}
**Important**: Use this content for context and reference, but do not copy it verbatim. Consider adding relevant pages to the \`related\` field in frontmatter.
`
: ''
}
**Your Task**: Generate complete documentation files directly (no proposal step).
@@ -908,16 +1194,40 @@ async function executePhase(options) {
async function main() {
const options = parseArguments();
// Show help first (don't wait for stdin)
if (options.help) {
printUsage();
process.exit(0);
}
// Check for stdin only if no draft file was provided
const hasStdin = !process.stdin.isTTY;
let stdinContent = null;
if (hasStdin && !options.draft) {
// Stdin requires --products option
if (!options.products) {
log(
'\n✗ Error: --products is required when piping content from stdin',
'red'
);
log(
'Example: echo "# Content" | yarn docs:create --products influxdb3_core',
'yellow'
);
process.exit(1);
}
// Import readDraftFromStdin
const { readDraftFromStdin } = await import('./lib/file-operations.js');
log('📥 Reading draft from stdin...', 'cyan');
stdinContent = await readDraftFromStdin();
}
// Determine workflow
if (options.url && options.url.length > 0) {
// URL-based workflow requires draft content
if (!options.draft && !stdinContent) {
log('\n✗ Error: --url requires --draft <path>', 'red');
log('The --url option specifies WHERE to create content.', 'yellow');
log(
@@ -934,29 +1244,75 @@ async function main() {
process.exit(1);
}
const context = await prepareURLPhase(
options.url,
options.draft,
options,
stdinContent
);
if (options['context-only']) {
// Stop after context preparation
process.exit(0);
}
// Generate prompt for product selection
const selectedProducts = await selectProducts(context, options);
const mode = context.urls?.length > 0 ? 'create' : 'create';
const isURLBased = true;
const hasExistingContent =
context.existingContent &&
Object.keys(context.existingContent).length > 0;
const prompt = generateClaudePrompt(
context,
selectedProducts,
mode,
isURLBased,
hasExistingContent
);
// Check environment and handle prompt accordingly
if (!isClaudeCode()) {
// Not in Claude Code: output prompt for external use
outputPromptForExternalUse(prompt, options['print-prompt']);
}
// In Claude Code: continue with AI analysis (Phase 2)
log('\n🤖 Running AI analysis with specialized agents...\n', 'bright');
await runAgentAnalysis(context, options);
// Execute proposal (Phase 3)
await executePhase(options);
} else if (options.draft || stdinContent) {
// Draft-based workflow (from file or stdin)
const context = await preparePhase(options.draft, options, stdinContent);
if (options['context-only']) {
// Stop after context preparation
process.exit(0);
}
// Generate prompt for product selection
const selectedProducts = await selectProducts(context, options);
const mode = 'create';
const isURLBased = false;
const prompt = generateClaudePrompt(
context,
selectedProducts,
mode,
isURLBased,
false
);
// Check environment and handle prompt accordingly
if (!isClaudeCode()) {
// Not in Claude Code: output prompt for external use
outputPromptForExternalUse(prompt, options['print-prompt']);
}
// In Claude Code: continue with AI analysis (Phase 2)
log('\n🤖 Running AI analysis with specialized agents...\n', 'bright');
await runAgentAnalysis(context, options);

View File

@@ -4,7 +4,7 @@
*/
import { readdirSync, readFileSync, existsSync, statSync } from 'fs';
import { join, dirname, resolve } from 'path';
import { fileURLToPath } from 'url';
import yaml from 'js-yaml';
import matter from 'gray-matter';
@@ -314,12 +314,19 @@ export function findSiblingWeights(dirPath) {
/**
* Prepare complete context for AI analysis
* @param {string|object} draftPathOrObject - Path to draft file or draft object
* @returns {object} Context object
*/
export function prepareContext(draftPathOrObject) {
// Read draft - handle both file path and draft object
let draft;
if (typeof draftPathOrObject === 'string') {
draft = readDraft(draftPathOrObject);
draft.path = draftPathOrObject;
} else {
// Already a draft object from stdin
draft = draftPathOrObject;
}
// Load products
const products = loadProducts();
@@ -349,7 +356,7 @@ export function prepareContext(draftPath) {
// Build context
const context = {
draft: {
path: draft.path || draftPathOrObject,
content: draft.content,
existingFrontmatter: draft.frontmatter,
},
@@ -616,7 +623,7 @@ export function detectSharedContent(filePath) {
if (parsed.data && parsed.data.source) {
return parsed.data.source;
}
} catch (_error) {
// Can't parse, assume not shared
return null;
}
@@ -663,13 +670,13 @@ export function findSharedContentVariants(sourcePath) {
const relativePath = fullPath.replace(REPO_ROOT + '/', '');
variants.push(relativePath);
}
} catch (_error) {
// Skip files that can't be parsed
continue;
}
}
}
} catch (_error) {
// Skip directories we can't read
}
}
@@ -758,3 +765,127 @@ export function analyzeURLs(parsedURLs) {
return results;
}
/**
* Extract and categorize links from markdown content
* @param {string} content - Markdown content
* @returns {object} {localFiles: string[], external: string[]}
*/
export function extractLinks(content) {
const localFiles = [];
const external = [];
// Match markdown links: [text](url)
const linkRegex = /\[([^\]]+)\]\(([^)]+)\)/g;
let match;
while ((match = linkRegex.exec(content)) !== null) {
const url = match[2];
// Skip anchor links and mailto
if (url.startsWith('#') || url.startsWith('mailto:')) {
continue;
}
// Local file paths (relative paths) - automatically followed
if (url.startsWith('../') || url.startsWith('./')) {
localFiles.push(url);
}
// All HTTP/HTTPS URLs (including docs.influxdata.com) - user selects
else if (url.startsWith('http://') || url.startsWith('https://')) {
external.push(url);
}
// Absolute paths starting with / are ignored (no base context to resolve)
}
return {
localFiles: [...new Set(localFiles)],
external: [...new Set(external)],
};
}
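As an illustration of the categorization rules above (the sample draft text is hypothetical):

```js
// Sketch: extractLinks() on a small sample draft.
const sample = [
  'See [the CLI guide](../reference/cli.md) and',
  '[the upstream spec](https://example.com/spec).',
  'Anchors like [this](#usage) and [mail](mailto:docs@example.com) are skipped.',
].join('\n');

const { localFiles, external } = extractLinks(sample);
// localFiles -> ['../reference/cli.md']       (relative path, followed automatically)
// external   -> ['https://example.com/spec']  (HTTP/HTTPS, offered for selection)
```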
/**
* Follow local file links (relative paths)
* @param {string[]} links - Array of relative file paths
* @param {string} basePath - Base path to resolve relative links from
* @returns {object[]} Array of {url, title, content, path, frontmatter}
*/
export function followLocalLinks(links, basePath = REPO_ROOT) {
const results = [];
for (const link of links) {
try {
// Resolve relative path from base path
const filePath = resolve(basePath, link);
// Check if file exists
if (existsSync(filePath)) {
const fileContent = readFileSync(filePath, 'utf8');
const parsed = matter(fileContent);
results.push({
url: link,
title: parsed.data?.title || 'Untitled',
content: parsed.content,
path: filePath.replace(REPO_ROOT + '/', ''),
frontmatter: parsed.data,
type: 'local',
});
} else {
results.push({
url: link,
error: 'File not found',
type: 'local',
});
}
} catch (error) {
results.push({
url: link,
error: error.message,
type: 'local',
});
}
}
return results;
}
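A minimal sketch of calling `followLocalLinks` with a draft-relative base path (the paths are illustrative only):

```js
// Sketch: resolve links relative to the draft's directory instead of REPO_ROOT.
import { dirname } from 'path';

const draftPath = 'drafts/new-feature.md'; // hypothetical draft location
const linked = followLocalLinks(['../content/shared/intro.md'], dirname(draftPath));

for (const entry of linked) {
  if (entry.error) {
    console.warn(`Skipped ${entry.url}: ${entry.error}`); // e.g. "File not found"
  } else {
    console.log(`${entry.title} (${entry.path})`);
  }
}
```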
/**
* Fetch external URLs
* @param {string[]} urls - Array of external URLs
* @returns {Promise<object[]>} Array of {url, title, content, type}
*/
export async function fetchExternalLinks(urls) {
// Dynamic import axios
const axios = (await import('axios')).default;
const results = [];
for (const url of urls) {
try {
const response = await axios.get(url, {
timeout: 10000,
headers: { 'User-Agent': 'InfluxData-Docs-Bot/1.0' },
});
// Extract title from HTML or use URL
const titleMatch = response.data.match(/<title>([^<]+)<\/title>/i);
const title = titleMatch ? titleMatch[1] : url;
results.push({
url,
title,
content: response.data,
type: 'external',
contentType: response.headers['content-type'],
});
} catch (error) {
results.push({
url,
error: error.message,
type: 'external',
});
}
}
return results;
}
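And a corresponding sketch for external links (the URL is a placeholder):

```js
// Sketch: fetch selected external links and report what was retrieved.
const pages = await fetchExternalLinks(['https://example.com/release-notes']);

for (const page of pages) {
  if (page.error) {
    console.warn(`Could not fetch ${page.url}: ${page.error}`);
  } else {
    console.log(`${page.title} (${page.contentType})`);
  }
}
```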

View File

@@ -28,6 +28,38 @@ export function readDraft(filePath) {
};
}
/**
* Read draft content from stdin
* @returns {Promise<{content: string, frontmatter: object, raw: string, path: string}>}
*/
export async function readDraftFromStdin() {
return new Promise((resolve, reject) => {
let data = '';
process.stdin.setEncoding('utf8');
process.stdin.on('data', (chunk) => {
data += chunk;
});
process.stdin.on('end', () => {
try {
// Parse with gray-matter to extract frontmatter if present
const parsed = matter(data);
resolve({
content: parsed.content,
frontmatter: parsed.data || {},
raw: data,
path: '<stdin>',
});
} catch (error) {
reject(error);
}
});
process.stdin.on('error', reject);
});
}
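A sketch of how the stdin path is exercised; the piped command in the comment mirrors the examples elsewhere in this PR:

```js
// Sketch: consume readDraftFromStdin() when content is piped, e.g.
//   echo '# New feature' | npx docs create --products influxdb3_core
if (!process.stdin.isTTY) {
  const draft = await readDraftFromStdin();
  console.log(draft.path);                     // '<stdin>'
  console.log(Object.keys(draft.frontmatter)); // frontmatter parsed by gray-matter, if any
}
```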
/**
* Write a markdown file with frontmatter
* @param {string} filePath - Path to write to

scripts/setup-local-bin.js Executable file
View File

@@ -0,0 +1,43 @@
#!/usr/bin/env node
/**
* Setup script to make the `docs` command available locally after yarn install.
* Creates a symlink in node_modules/.bin/docs pointing to scripts/docs-cli.js
*/
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
import { existsSync, mkdirSync, symlinkSync, unlinkSync, chmodSync } from 'fs';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const rootDir = join(__dirname, '..');
const binDir = join(rootDir, 'node_modules', '.bin');
const binLink = join(binDir, 'docs');
const targetScript = join(rootDir, 'scripts', 'docs-cli.js');
try {
// Ensure node_modules/.bin directory exists
if (!existsSync(binDir)) {
mkdirSync(binDir, { recursive: true });
}
// Remove existing symlink if it exists
if (existsSync(binLink)) {
unlinkSync(binLink);
}
// Create symlink
symlinkSync(targetScript, binLink, 'file');
// Ensure the target script is executable
chmodSync(targetScript, 0o755);
console.log('✓ Created local `docs` command in node_modules/.bin/');
console.log(' You can now use: npx docs <command>');
console.log(' Or add node_modules/.bin to your PATH for direct access');
} catch (error) {
console.error('Failed to setup local docs command:', error.message);
process.exit(1);
}
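For completeness, a small verification sketch for the symlink this script creates (assumes it runs from the repository root):

```js
// Sketch: confirm node_modules/.bin/docs points at scripts/docs-cli.js.
import { existsSync, lstatSync, realpathSync } from 'fs';
import { join } from 'path';

const link = join('node_modules', '.bin', 'docs');
if (existsSync(link) && lstatSync(link).isSymbolicLink()) {
  console.log('docs ->', realpathSync(link)); // should resolve to scripts/docs-cli.js
}
```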

View File

@@ -7,6 +7,7 @@ You are analyzing a documentation draft to generate an intelligent file structur
**Context file**: `.tmp/scaffold-context.json`
Read and analyze the context file, which contains:
- **draft**: The markdown content and any existing frontmatter
- **products**: Available InfluxDB products (Core, Enterprise, Cloud, etc.)
- **productHints**: Products mentioned or suggested based on content analysis
@@ -54,6 +55,7 @@ For each file, create complete frontmatter with:
### 4. Code Sample Considerations
Based on `versionInfo`:
- Use version-specific CLI commands (influxdb3, influx, influxctl)
- Reference appropriate API endpoints (/api/v3, /api/v2)
- Note testing requirements from `conventions.testing`
@@ -61,6 +63,7 @@ Based on `versionInfo`:
### 5. Style Compliance
Follow conventions from `conventions.namingRules`:
- Files: Use lowercase with hyphens (e.g., `manage-databases.md`)
- Directories: Use lowercase with hyphens
- Shared content: Place in appropriate `/content/shared/` subdirectory
@@ -133,4 +136,8 @@ Generate a JSON proposal matching the schema in `scripts/schemas/scaffold-propos
4. Generate complete frontmatter for all files
5. Save the proposal to `.tmp/scaffold-proposal.json`
The following command validates and creates files from the proposal:
```bash
npx docs create --proposal .tmp/scaffold-proposal.json
```

View File

@@ -194,6 +194,11 @@
resolved "https://registry.yarnpkg.com/@evilmartians/lefthook/-/lefthook-1.12.3.tgz#081eca59a6d33646616af844244ce6842cd6b5a5"
integrity sha512-MtXIt8h+EVTv5tCGLzh9UwbA/LRv6esdPJOHlxr8NDKHbFnbo8PvU5uVQcm3PAQTd4DZN3HoyokqrwGwntoq6w==
"@github/copilot@latest":
version "0.0.353"
resolved "https://registry.yarnpkg.com/@github/copilot/-/copilot-0.0.353.tgz#3c8d8a072b3defbd2200c9fe4fb636d633ac7f1e"
integrity sha512-OYgCB4Jf7Y/Wor8mNNQcXEt1m1koYm/WwjGsr5mwABSVYXArWUeEfXqVbx+7O87ld5b+aWy2Zaa2bzKV8dmqaw==
"@humanfs/core@^0.19.1":
version "0.19.1"
resolved "https://registry.yarnpkg.com/@humanfs/core/-/core-0.19.1.tgz#17c55ca7d426733fe3c561906b8173c336b40a77"
@@ -1364,6 +1369,13 @@ confbox@^0.2.2:
resolved "https://registry.yarnpkg.com/confbox/-/confbox-0.2.2.tgz#8652f53961c74d9e081784beed78555974a9c110"
integrity sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ==
copilot@^0.0.2:
version "0.0.2"
resolved "https://registry.yarnpkg.com/copilot/-/copilot-0.0.2.tgz#4712810c9182cd784820ed44627bedd32dd377f9"
integrity sha512-nedf34AaYj9JnFhRmiJEZemAno2WDXMypq6FW5aCVR0N+QdpQ6viukP1JpvJDChpaMEVvbUkMjmjMifJbO/AgQ==
dependencies:
"@github/copilot" latest
core-util-is@1.0.2:
version "1.0.2"
resolved "https://registry.yarnpkg.com/core-util-is/-/core-util-is-1.0.2.tgz#b5fd54220aa2bc5ab57aab7140c940754503c1a7"