# AscendC Operator Interface Documentation Generation
Extract interface information from operator source code to generate PyTorch official documentation-style Chinese API interface documentation.
Prerequisites: Compilation and testing completed (Phase 3 finished), and all of the following files are ready:

- `register.cpp` — Contains the Python call schema registered via `m.def(...)`
- — Contains C++ function declarations
- `csrc/ops/<op_name>/design.md` — Contains the algorithm description, parameter explanations, dtype support, and constraint conditions
- `csrc/ops/<op_name>/op_host/<op_name>.cpp` — Contains TORCH_CHECK constraints and the actual parameter-processing logic
- — Contains runnable call examples
## Workflow

Information Extraction → Documentation Structure Assembly → File Generation → Display in Chat Interface

## Phase 1: Information Extraction

Extract all information required for the interface documentation from the following source files. You MUST read these files one by one; no skipping is allowed.
### 1.1 Extract the Python Call Signature from `register.cpp`

Locate the `m.def("...")` line and extract the complete schema string.

Extracted content:
- Function name
- Parameter list (including types and default values)
- Return type

Example:

```cpp
m.def("acosh(Tensor self) -> Tensor");
```

→ Signature: `torch.ops.npu.acosh(self) → Tensor`
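The extraction step above can be sketched with a small regex scan. This is an illustrative sketch, not part of the skill itself: the helper name `extract_schemas` is hypothetical, and it assumes each `m.def("...")` registration fits on a single line, as in the example.

```python
import re

def extract_schemas(register_cpp_text):
    # Collect every schema string registered via m.def("...").
    # Assumes the registration fits on one line.
    return re.findall(r'm\.def\(\s*"([^"]+)"', register_cpp_text)

print(extract_schemas('m.def("acosh(Tensor self) -> Tensor");'))
# ['acosh(Tensor self) -> Tensor']
```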
Schema Type to Python Type Mapping (schema notation follows PyTorch operator-schema syntax):

| Schema Type | Python Documentation Type | Meaning |
|---|---|---|
| `Tensor` | Tensor | Mandatory tensor parameter |
| `Tensor?` | Tensor, optional | Optional tensor parameter |
| `Tensor(a!)` | Tensor | In-place modified tensor |
| `int` | int | Integer parameter |
| `int?` | int, optional | Optional integer parameter |
| `int[]` | list[int] | Integer list |
| `int[n]` | list[int] | Fixed-length integer list |
| `float` | float | Floating-point parameter |
| `bool` | bool | Boolean parameter |
| `str?` | str, optional | Optional string parameter |
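The mapping above can be applied mechanically. The sketch below is hedged: it assumes PyTorch's operator-schema notation (`?` for optional, `[]`/`[n]` for lists, `(a!)` for in-place annotations) and should be verified against the actual schemas in `register.cpp`; the helper name `doc_type` is illustrative.

```python
import re

def doc_type(schema_type):
    # Convert one schema type (e.g. "Tensor?", "int[2]") into the
    # documentation type used in the mapping above.
    t = schema_type.strip()
    optional = t.endswith("?")
    if optional:
        t = t[:-1]
    t = re.sub(r"\(.*?\)", "", t)  # drop in-place annotations such as (a!)
    m = re.fullmatch(r"(\w+)\[\d*\]", t)
    if m:
        t = f"list[{m.group(1)}]"  # int[] and int[2] both document as list[int]
    return t + (", optional" if optional else "")

print(doc_type("Tensor?"))  # Tensor, optional
print(doc_type("int[2]"))   # list[int]
```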
### 1.2 Extract the C++ Function Declaration

Locate the complete C++ signature of the corresponding function in the file that contains the C++ function declarations.

Extracted content:
- Return type
- Parameter types and parameter names
- Whether parameters are const references / optional
### 1.3 Extract Algorithm and Design Information from `design.md`

Extracted content:
| Extraction Item | Design Document Section | Documentation Usage |
|---|---|---|
| Algorithm description / mathematical formula | Computational logic design | Function description paragraph |
| Parameter semantic explanation | Operator interface definition | Parameter explanation paragraph |
| Supported data types | Operator interface definition | Supported data types paragraph |
| Input and output shape constraints | Tiling strategy / operator interface definition | Shape paragraph |
| Valid input range / constraints | Notes / interface definition | Constraint conditions paragraph |
### 1.4 Extract Runtime Constraints from `op_host/<op_name>.cpp`

Search all `TORCH_CHECK` statements and extract:

| TORCH_CHECK Content | Documentation Usage |
|---|---|
| Dimension checks | Shape constraints |
| dtype checks | Supported data types |
| Value-range checks (numerical range limits) | Constraint conditions |
| Parameter mutual exclusion / dependency | Constraint conditions / parameter explanation |
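A minimal sketch of this scan, under stated assumptions: each `TORCH_CHECK(condition, message)` call sits on one line and its condition contains no top-level comma; nested or multi-line checks would need a real parser. The helper name `extract_torch_checks` is illustrative.

```python
import re

def extract_torch_checks(cpp_text):
    # Capture the condition expression of each single-line TORCH_CHECK call
    # (everything up to the first comma).
    return re.findall(r'TORCH_CHECK\((.+?),', cpp_text)

src = 'TORCH_CHECK(self.dim() == 2, "expected a 2-D input");'
print(extract_torch_checks(src))  # ['self.dim() == 2']
```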
### 1.5 Extract Usage Examples

Locate the most concise and complete call example in the file that contains runnable call examples, and extract:
- The input tensor construction method
- The operator call statement (`torch.ops.npu.<op_name>(...)`)
- The output processing method
## Phase 2: Documentation Structure Assembly

Assemble the documentation content according to the fixed structure below. Strictly follow the format of PyTorch official documentation; the main body of the documentation should be in Chinese.

### Documentation Template
````markdown
# torch.ops.npu.<op_name>

```
torch.ops.npu.<op_name>(<param1>, <param2>, ..., <paramN>) → <ReturnType>
```

<Function description: 1–3 Chinese sentences explaining what the operator does. If there are mathematical formulas, show them inline with LaTeX.>

<If there are mathematical formulas, also display them in standalone formula blocks:>

$$
<Formula>
$$

## Parameter Explanation

- **<param1>** (*<type>*) – <Chinese description>
- **<param2>** (*<type>*) – <Chinese description>
- **<param3>** (*<type>, optional*) – <Chinese description>. Default value: `<default value>`

## Supported Data Types

`torch.float16`, `torch.bfloat16`, `torch.float32`

## Shape

- **Input**: <Shape description, using mathematical notation such as (N, *), (S, N, D), etc.>
- **Output**: <Shape description>

<If there are additional shape rules, add them as a list>

## Constraint Conditions

- <Constraint condition 1>
- <Constraint condition 2>

## Usage Examples

```python
>>> import torch
>>> import torch_npu
>>> import ascend_kernel
>>> <Construct input>
>>> <Call operator>
>>> <Display output>
```

## Return Value

*<Return type>* – <Chinese description of the return value>
````
### Detailed Specifications for Each Paragraph
#### Title Signature
- Format: `torch.ops.npu.<op_name>(<parameter list>) → <return type>`
- Parameter list is extracted from the schema in `register.cpp`, only parameter names are retained (without types)
- Parameters with default values are written as `<name>=<default>`
- Return type: `Tensor`, `tuple[Tensor, ...]`, `None` (corresponding to C++ `void`)
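The rules above can be sketched as a small converter. This is an illustrative sketch rather than a prescribed implementation: the helper name `title_signature` is hypothetical, and it assumes default values contain no commas.

```python
def title_signature(schema, namespace="torch.ops.npu"):
    # "acosh(Tensor self) -> Tensor" -> "torch.ops.npu.acosh(self) → Tensor"
    name, rest = schema.split("(", 1)
    params_str, ret = rest.rsplit(") -> ", 1)
    params = []
    for p in (p.strip() for p in params_str.split(",")):
        if not p:
            continue
        head, _, default = p.partition("=")
        pname = head.split()[-1]  # drop the schema type, keep only the name
        params.append(f"{pname}={default.strip()}" if default else pname)
    return f"{namespace}.{name}({', '.join(params)}) → {ret}"

print(title_signature("acosh(Tensor self) -> Tensor"))
# torch.ops.npu.acosh(self) → Tensor
```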
#### Function Description
- Written in Chinese
- The first sentence summarizes the operator's function
- If there are mathematical formulas, display them using LaTeX
- If there are multiple working modes (controlled by parameters), explain them separately
#### Parameter Explanation
- One parameter per line, format: `- **<name>** (*<type>*) – <Chinese description>`
- Optional parameters are marked with `optional`: `- **<name>** (*<type>, optional*) – <Chinese description>. Default value: \`<value>\``
- Parameter description should include: semantic explanation, shape requirements (if applicable), valid value range (if applicable)
- Tensor parameters should explain the shape format, e.g., "Shape: `(S, N, D)`"
- Explain the meaning of each value for boolean/enumeration parameters
#### Supported Data Types
- List all PyTorch dtypes supported by the operator
- Format: `` `torch.float16`, `torch.bfloat16`, `torch.float32` ``
- Cross-verify from op_host TORCH_CHECK and design.md
#### Shape
- Use the bold labels **Input** and **Output**
- Describe the shape semantics of input and output tensors
- Use uppercase letters to represent the meaning of each dimension, e.g., `(N, C, H, W)` — N: batch size, C: channel count, ...
- If the shape changes with parameters, explain separately
#### Constraint Conditions
- Written in Chinese
- List all constraint conditions checked in TORCH_CHECK
- List the valid input ranges mentioned in design.md
- If there are mutual exclusion/dependency relationships between parameters, explain clearly
#### Usage Examples
- Provide complete code snippets that can be directly run on NPU
- Include `import` statements
- Use `>>>` prefix (doctest style)
- Use small-size input data (for easy display)
- Show at least 1 typical use case
- If there are multiple usage modes, show 1 for each
#### Return Value
- Described in Chinese
- Format: `*<type>* – <Chinese description>`
- Explain the shape and dtype of the returned tensor (note "consistent with input dtype" if it matches the input)
- If multiple values are returned (tuple), explain each one
---
## Phase 3: File Generation
Write the assembled documentation to `ascend-kernel/csrc/ops/<op_name>/README.md`.
**File Writing Rules**:
- If README.md already exists, **overwrite** the old content
- Use UTF-8 encoding
- Use standard LaTeX syntax for mathematical formulas (`$...$` for inline, `$$...$$` for block-level)
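The writing rules above amount to a few lines of code. A minimal sketch, with the caveat that the `write_readme` helper and its `root` parameter are illustrative, not part of the skill:

```python
from pathlib import Path

def write_readme(op_name, content, root="ascend-kernel"):
    # Overwrite (not append) the operator's README.md, UTF-8 encoded.
    path = Path(root) / "csrc" / "ops" / op_name / "README.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")
    return path
```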
---
## Phase 4: Display Documentation in Interactive Interface (MANDATORY)
After file generation, **MUST** directly output the complete content of README.md to the chat interface.
**Display Format**:

```
Interface Documentation Generated

File Path: ascend-kernel/csrc/ops/<op_name>/README.md

<Complete README.md content>
```
**Requirements**:
- Display the complete documentation content without truncation
- Display the file path so users can check it
- If a paragraph cannot be filled due to insufficient information, mark it clearly and remind the user to supplement it
## Complete Example

The following is a hypothetical interface documentation example for the `acosh` operator, showing the final generated result.

Note: This example only demonstrates the documentation format; during actual generation, all information is extracted from the source code.

````markdown
# torch.ops.npu.acosh

```
torch.ops.npu.acosh(self) → Tensor
```

Calculate the inverse hyperbolic cosine of each element of the input tensor.

$$
\text{out}_i = \cosh^{-1}(\text{input}_i) = \ln(\text{input}_i + \sqrt{\text{input}_i^2 - 1})
$$

## Parameter Explanation

- **self** (*Tensor*) – Input tensor; element values must be $\geq 1$. Supports any shape.

## Supported Data Types

`torch.float16`, `torch.float32`

## Shape

- **Input**: $(*)$, any shape is supported
- **Output**: $(*)$, same shape as the input

## Constraint Conditions

- Only `float16` and `float32` data types are supported
- All elements of the input tensor must be $\geq 1$; otherwise the result is `NaN`

## Usage Examples

```python
>>> import torch
>>> import torch_npu
>>> import ascend_kernel
>>> x = torch.tensor([1.0, 2.0, 3.0, 10.0], dtype=torch.float32, device="npu:0")
>>> output = torch.ops.npu.acosh(x)
>>> output
tensor([0.0000, 1.3170, 1.7627, 2.9932], device='npu:0')
```

## Return Value

*Tensor* – Result of the inverse hyperbolic cosine computation; same shape as the input, with dtype consistent with the input.
````

## Checklist

After the documentation is generated, verify it item by item against the following checklist:
### Anti-Pattern Checklist

- NEVER fabricate parameter or dtype information; all information must be extracted from the source code
- NEVER skip extracting TORCH_CHECK constraints
- NEVER use parameter names or types inconsistent with the `register.cpp` schema
- NEVER omit the usage examples section
- NEVER output only the file path without displaying the complete documentation content in the chat interface
- NEVER modify operator source code; this skill performs read-only documentation generation
- NEVER write the main body of the documentation in English (except the title signature, code, and mathematical formulas)
## Readable File Scope

| File | Read Content |
|---|---|
| `register.cpp` | Python call schema |
|  | C++ function declarations |
| `csrc/ops/<op_name>/design.md` | Algorithm description, parameter explanation, dtype support, constraints |
| `csrc/ops/<op_name>/op_host/<op_name>.cpp` | TORCH_CHECK constraints, parameter-processing logic |
|  | Usage examples |