The ChatFacile model, better known as Devoir Facile, is the most widely used AI education agent in France. The model is accessible for free, via this API only. Its advantages include being free and responding quickly (in under a second).
Average response time < 200ms
99.9% uptime guarantee
End-to-end encrypted
You're viewing documentation for version v1 of the API. Please ensure you're using the correct endpoint in your requests.
All API calls must be authenticated using your API key. Include your API key in the `Authorization` header as a Bearer token.
curl -X POST "https://api.lannetech.com/v1/generate-text" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json"
Keep your API key secure and never share it in public repositories or client-side code. You can regenerate your API key from the dashboard if needed.
All API requests should be made to the base URL `https://api.lannetech.com/v1` followed by the endpoint path.
All API requests must use HTTPS. Requests over plain HTTP will fail.
Send data as JSON in the request body with Content-Type: application/json.
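The request described above can be sketched in Python. This is a minimal illustration using only the standard library; the `build_request` helper is not part of the API, and the base URL is taken from the curl examples in this document:

```python
import json

API_BASE = "https://api.lannetech.com/v1"  # base URL used by the curl examples

def build_request(api_key: str, payload: dict):
    """Return the URL, headers, and JSON body for a generate-text call."""
    url = f"{API_BASE}/generate-text"
    headers = {
        "Authorization": f"Bearer {api_key}",   # Bearer token authentication
        "Content-Type": "application/json",     # body is sent as JSON
    }
    return url, headers, json.dumps(payload)

url, headers, body = build_request("YOUR_API_KEY", {"model": "chatfacile"})
# These pieces can be handed to any HTTP client, e.g.:
#   requests.post(url, headers=headers, data=body)
```

Keeping the key in a variable (or an environment variable) rather than hard-coding it also makes it easier to follow the key-security advice above.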
Send the following parameters in the request body as JSON.
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier to use for this request |
| messages | array | Yes | Array of message objects, each with a `role` and a `content` string |
| max_tokens | integer | Yes | Maximum number of tokens to generate |
| temperature | double | Yes | Controls generation creativity (0.0 to 1.0) |
{
"model": "chatfacile",
"messages": [
{
"role": "user",
"content": "Write a short story about AI"
}
],
"max_tokens": 100,
"temperature": 0.7
}
This API uses an asynchronous generation process. This means the generation happens in two steps:
First, make a POST request with your generation parameters. You'll receive a request_id in response.
// Response from initial request
{
"text": "your prompt",
"model": "riffusion",
"request_id": "11aeb1db-1860-4550-b221-7cbfa1169732",
"status": "pending",
"created": 1745023903
}
Then, use the request_id to check the generation status until it's complete.
// Response from status check
{
"model": "riffusion",
"request_id": "11aeb1db-1860-4550-b221-7cbfa1169732",
"status": "success",
"audio_url": "https://storage.googleapis.com/...",
"created": 1745023903
}
To check the status of your generation, make a POST request to the same endpoint with only the request_id:
curl -X POST "https://api.lannetech.com/v1/generate-text" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"request_id": "11aeb1db-1860-4550-b221-7cbfa1169732"
}'
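The check-until-complete step can be written as a small polling loop. A sketch in Python: the `check` argument stands in for whatever function actually POSTs `{"request_id": ...}` to the endpoint and returns the parsed JSON, so the loop itself makes no assumptions beyond the documented `status` field:

```python
import time

def poll_status(check, request_id, interval=2.0, timeout=60.0):
    """Call `check(request_id)` until the status leaves "pending".

    `check` should POST {"request_id": request_id} to the endpoint and
    return the parsed JSON response.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = check(request_id)
        if resp.get("status") != "pending":
            return resp  # success, complete, or failed
        time.sleep(interval)  # avoid hammering the rate limit
    raise TimeoutError(f"generation {request_id} still pending after {timeout}s")
```

A couple of seconds between checks is a reasonable default; tighter loops only burn through your rate limit.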
For audio models like Riffusion, you can use the wait_for_result parameter to automatically wait until the generation is complete:
curl -X POST "https://api.lannetech.com/v1/generate-text" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "A calm and relaxing piano track with soft melodies",
"model": "riffusion",
"wait_for_result": true
}'
When wait_for_result is set to true, the API waits up to 60 seconds for the generation to complete:
{
"model": "riffusion",
"request_id": "c9d0596caafdef1a72fa596b464eb1b7",
"status": "complete",
"audio_url": "https://kieaifiles.erweima.ai/ZmE3ZjEyNWUtNzEwMS00YjFiLTlmYWQtMzkzMmMyYzZiMDU4.mp3",
"data": {
"task_id": "c9d0596caafdef1a72fa596b464eb1b7",
"status": "complete",
"callback_type": "first",
"data": [
{
"id": "8551****662c",
"audio_url": "https://kieaifiles.erweima.ai/ZmE3ZjEyNWUtNzEwMS00YjFiLTlmYWQtMzkzMmMyYzZiMDU4.mp3",
"stream_audio_url": "https://example.cn/****",
"prompt": "A calm and relaxing piano track with soft melodies",
"model_name": "chirp-v3-5",
"duration": 198.44
}
],
"received_at": "2025-01-01 00:00:00"
},
"created": 1746291951
}
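Note that the audio URL appears both at the top level and inside the nested data array in the response above. A small Python helper (illustrative, not part of the API) that prefers the nested entries and falls back to the top-level field:

```python
def first_audio_url(resp: dict):
    """Return the first audio URL from a wait_for_result response.

    Prefers entries in resp["data"]["data"]; falls back to the
    top-level "audio_url" field.
    """
    items = resp.get("data", {}).get("data") or []
    for item in items:
        if item.get("audio_url"):
            return item["audio_url"]
    return resp.get("audio_url")
```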
| Status | Description |
|---|---|
| pending | The generation is still in progress |
| success | Generation completed successfully, result is available |
| complete | Generation completed successfully (equivalent to success), result is available |
| failed | Generation failed, check error message |
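Since the table documents both success and complete as terminal "done" states, client code should treat them interchangeably. A small sketch (the helper names are illustrative):

```python
TERMINAL_STATUSES = {"success", "complete", "failed"}

def is_finished(status: str) -> bool:
    """True once polling can stop, whether the generation worked or not."""
    return status in TERMINAL_STATUSES

def is_ok(status: str) -> bool:
    """True for either documented success state ("success" or "complete")."""
    return status in ("success", "complete")
```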
Save the request_id if you need to check the status later.

Below are examples showing how to use this API in different programming languages.
curl -X POST "https://api.lannetech.com/v1/generate-text" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "chatfacile",
"messages": [
{
"role": "user",
"content": "Write a short story about AI"
}
],
"max_tokens": 100,
"temperature": 0.7
}'
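The same request in Python, using only the standard library. This mirrors the curl example above one-to-one; the live call is commented out so the sketch can be read without triggering a request:

```python
import json
import urllib.request

payload = {
    "model": "chatfacile",
    "messages": [{"role": "user", "content": "Write a short story about AI"}],
    "max_tokens": 100,
    "temperature": 0.7,
}

req = urllib.request.Request(
    "https://api.lannetech.com/v1/generate-text",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```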
The API returns responses in JSON format. A successful response follows the structure shown in the asynchronous generation section above: the model, a request_id, a status, and a created timestamp, plus the result once generation completes.
The API uses standard HTTP status codes to indicate the success or failure of requests.
| Code | Status | Description |
|---|---|---|
| 200 | OK | The request was successful. The response contains the requested data. |
| 400 | Bad Request | The request was invalid. Check the request parameters and format. |
| 401 | Unauthorized | Authentication failed. Check your API key. |
| 402 | Payment Required | Insufficient credits for this operation. Please add more credits to your account. |
| 429 | Too Many Requests | Rate limit exceeded. Slow down your request rate. |
| 500 | Server Error | An error occurred on the server. Try again later or contact support if the issue persists. |
| 504 | Gateway Timeout | The operation took too long to complete. Try again with simplified parameters. |
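Of the codes in the table, 429, 500, and 504 are transient and worth retrying with backoff, while 400, 401, and 402 indicate a problem with the request or account that retrying will not fix. A retry sketch in Python; `send` stands in for whatever function performs the actual HTTP call and returns a `(status_code, body)` pair:

```python
import time

RETRYABLE = {429, 500, 504}  # transient statuses from the table above

def call_with_retry(send, max_attempts=3, base_delay=1.0):
    """Call send() -> (status_code, body), retrying transient failures
    with exponential backoff (base_delay, 2*base_delay, 4*base_delay, ...)."""
    for attempt in range(max_attempts):
        status, body = send()
        if status == 200:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"request failed with HTTP {status}")
        time.sleep(base_delay * (2 ** attempt))
```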
To ensure fair usage and service stability, API requests are subject to rate limiting.
API responses include headers to help you track your rate limit usage:
- X-RateLimit-Limit: Maximum requests allowed in the current period
- X-RateLimit-Remaining: Number of requests remaining in the current period
- X-RateLimit-Reset: Time (in seconds) until the limit resets

If you have questions or run into issues, our developer support team is ready to help.
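Reading these headers lets a client slow down before it hits a 429. A small Python helper (illustrative; HTTP header names are case-insensitive, so the lookup normalizes them):

```python
def parse_rate_limit(headers: dict) -> dict:
    """Extract the documented rate-limit headers from a response."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {
        "limit": int(lowered.get("x-ratelimit-limit", 0)),
        "remaining": int(lowered.get("x-ratelimit-remaining", 0)),
        "reset_seconds": int(lowered.get("x-ratelimit-reset", 0)),
    }
```

If `remaining` reaches 0, waiting `reset_seconds` before the next request avoids a 429.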