YouTube Comments API: Get Comments Programmatically
Complete developer guide to the YouTube Comments API. Learn to retrieve comments, handle pagination, manage rate limits, and build applications that analyze YouTube engagement using the YouTube Data API v3.
The YouTube Comments API is part of the YouTube Data API v3 and allows developers to programmatically retrieve, post, and manage comments on YouTube videos. To get comments, use the commentThreads.list endpoint with a video ID and API key. The free tier provides 10,000 quota units daily, enough to retrieve approximately 1 million comments per day with efficient coding.
Key Takeaways
- YouTube Comments API is part of YouTube Data API v3
- Free tier: 10,000 quota units per day (no credit card required)
- Main endpoints: commentThreads.list and comments.list
- Each list (read) call costs 1 quota unit regardless of results returned; write operations cost 50 units
- Pagination required for videos with >100 comments
- OAuth 2.0 required for posting/modifying comments; API key sufficient for reading
Getting Started
Step 1: Create a Google Cloud Project
1. Go to the Google Cloud Console
2. Click "Create Project"
3. Name your project (e.g., "YouTube Comment Analysis")
4. Click "Create"
Step 2: Enable YouTube Data API v3
1. In the Cloud Console, go to "APIs & Services" > "Library"
2. Search for "YouTube Data API v3"
3. Click the API, then click "Enable"
Step 3: Create API Credentials
For read-only access (retrieving comments):
- 1.Go to "APIs & Services" > "Credentials"
- 2.Click "Create Credentials" > "API Key"
- 3.Copy and securely store your API key
- 4.(Optional) Restrict key to YouTube Data API v3 only
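A minimal sketch of that verification, assuming the requests library (YOUR_API_KEY and the video ID are placeholders; a 200 response with no error object means the key and the API are set up correctly):
import requests

# Sketch: one cheap commentThreads.list call to confirm the key works (1 quota unit)
resp = requests.get(
    'https://www.googleapis.com/youtube/v3/commentThreads',
    params={'part': 'id', 'videoId': 'dQw4w9WgXcQ', 'key': 'YOUR_API_KEY', 'maxResults': 1}
)
print(resp.status_code)          # 200 on success
print(resp.json().get('error'))  # None on success, error details otherwise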
For write access (posting comments):
1. Create an OAuth 2.0 Client ID instead
2. Configure the consent screen
3. Download the client secrets JSON
4. Implement the OAuth flow in your application (a sketch follows below)
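A minimal sketch of that flow, assuming Google's official client libraries google-auth-oauthlib and google-api-python-client are installed (client_secret.json is the downloaded secrets file; the video ID and comment text are placeholders):
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

# Run the installed-app OAuth flow; this opens a browser for user consent
flow = InstalledAppFlow.from_client_secrets_file(
    'client_secret.json',
    scopes=['https://www.googleapis.com/auth/youtube.force-ssl']
)
credentials = flow.run_local_server(port=0)

# Build an authorized client and post a top-level comment (a write operation, 50 quota units)
youtube = build('youtube', 'v3', credentials=credentials)
youtube.commentThreads().insert(
    part='snippet',
    body={'snippet': {
        'videoId': 'dQw4w9WgXcQ',
        'topLevelComment': {'snippet': {'textOriginal': 'Great video!'}}
    }}
).execute()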
API Endpoints Overview
CommentThreads Endpoint
Retrieves top-level comments and optionally their replies.
GET https://www.googleapis.com/youtube/v3/commentThreads
Use for:
- Getting all comments on a video
- Retrieving comments with their reply threads
- Sorting by relevance (top comments) or time
Comments Endpoint
Retrieves individual comments or replies to a specific comment; a minimal lookup sketch follows the list below.
GET https://www.googleapis.com/youtube/v3/comments
Use for:
- Getting replies to a specific comment
- Retrieving comment details by ID
- Updating or deleting comments (with OAuth)
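A minimal lookup sketch using the requests library (the comment IDs and YOUR_API_KEY are placeholders; fetching replies with parentId is covered in the pagination section further down):
import requests

# Sketch: fetch specific comments by their IDs (comma-separated) with comments.list
resp = requests.get(
    'https://www.googleapis.com/youtube/v3/comments',
    params={'part': 'snippet', 'id': 'COMMENT_ID_1,COMMENT_ID_2', 'key': 'YOUR_API_KEY'}
)
for item in resp.json().get('items', []):
    print(item['id'], item['snippet']['textDisplay'])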
Retrieving Comments: Code Examples
Python: Basic Comment Retrieval
import requests
API_KEY = 'YOUR_API_KEY'
VIDEO_ID = 'dQw4w9WgXcQ'
def get_comments(video_id, api_key, max_results=100):
"""Retrieve comments from a YouTube video."""
url = 'https://www.googleapis.com/youtube/v3/commentThreads'
params = {
'part': 'snippet,replies',
'videoId': video_id,
'key': api_key,
'maxResults': max_results,
'order': 'relevance' # or 'time' for newest first
}
comments = []
while True:
response = requests.get(url, params=params)
data = response.json()
if 'error' in data:
print(f"Error: {data['error']['message']}")
break
for item in data.get('items', []):
# Get top-level comment
top_comment = item['snippet']['topLevelComment']['snippet']
comment_data = {
'id': item['id'],
'author': top_comment['authorDisplayName'],
'author_channel_id': top_comment.get('authorChannelId', {}).get('value', ''),
'text': top_comment['textDisplay'],
'likes': top_comment['likeCount'],
'published_at': top_comment['publishedAt'],
'updated_at': top_comment['updatedAt'],
'reply_count': item['snippet']['totalReplyCount'],
'replies': []
}
# Get replies if available
if 'replies' in item:
for reply in item['replies']['comments']:
reply_snippet = reply['snippet']
comment_data['replies'].append({
'id': reply['id'],
'author': reply_snippet['authorDisplayName'],
'text': reply_snippet['textDisplay'],
'likes': reply_snippet['likeCount'],
'published_at': reply_snippet['publishedAt']
})
comments.append(comment_data)
# Check for more pages
if 'nextPageToken' in data:
params['pageToken'] = data['nextPageToken']
else:
break
return comments
# Usage
comments = get_comments(VIDEO_ID, API_KEY)
print(f"Retrieved {len(comments)} comment threads")Python: Export to JSON/CSV
import json
import csv
def export_to_json(comments, filename):
"""Export comments to JSON file."""
with open(filename, 'w', encoding='utf-8') as f:
json.dump(comments, f, indent=2, ensure_ascii=False)
def export_to_csv(comments, filename):
"""Export comments to CSV file."""
with open(filename, 'w', newline='', encoding='utf-8') as f:
writer = csv.writer(f)
writer.writerow(['id', 'author', 'text', 'likes', 'published_at', 'reply_count'])
for comment in comments:
writer.writerow([
comment['id'],
comment['author'],
comment['text'],
comment['likes'],
comment['published_at'],
comment['reply_count']
])
# Usage
comments = get_comments(VIDEO_ID, API_KEY)
export_to_json(comments, 'comments.json')
export_to_csv(comments, 'comments.csv')
JavaScript: Node.js Implementation
const axios = require('axios');
const API_KEY = 'YOUR_API_KEY';
const VIDEO_ID = 'dQw4w9WgXcQ';
async function getComments(videoId, apiKey, maxResults = 100) {
const baseUrl = 'https://www.googleapis.com/youtube/v3/commentThreads';
const comments = [];
let pageToken = null;
do {
const params = {
part: 'snippet,replies',
videoId: videoId,
key: apiKey,
maxResults: maxResults,
order: 'relevance'
};
if (pageToken) {
params.pageToken = pageToken;
}
try {
const response = await axios.get(baseUrl, { params });
const data = response.data;
for (const item of data.items || []) {
const topComment = item.snippet.topLevelComment.snippet;
const commentData = {
id: item.id,
author: topComment.authorDisplayName,
text: topComment.textDisplay,
likes: topComment.likeCount,
publishedAt: topComment.publishedAt,
replyCount: item.snippet.totalReplyCount,
replies: []
};
if (item.replies) {
for (const reply of item.replies.comments) {
commentData.replies.push({
id: reply.id,
author: reply.snippet.authorDisplayName,
text: reply.snippet.textDisplay,
likes: reply.snippet.likeCount
});
}
}
comments.push(commentData);
}
pageToken = data.nextPageToken;
} catch (error) {
console.error('Error:', error.response?.data?.error?.message || error.message);
break;
}
} while (pageToken);
return comments;
}
// Usage
(async () => {
const comments = await getComments(VIDEO_ID, API_KEY);
console.log(`Retrieved ${comments.length} comment threads`);
})();
API Parameters Reference
commentThreads.list Parameters
| Parameter | Required | Description |
|---|---|---|
| part | Yes | Data to return: snippet, replies, id |
| videoId | Conditional | Video to get comments from |
| channelId | Conditional | Comment threads about the specified channel (not video comments) |
| id | Conditional | Specific comment thread IDs |
| key | Yes | Your API key |
| maxResults | No | 1-100, default 20 |
| pageToken | No | Token for pagination |
| order | No | relevance or time (default: time) |
| searchTerms | No | Filter by search terms |
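For example, a parameter set that combines several of the options above (a sketch with placeholder values): newest comments first, 100 per page, and a server-side keyword filter.
# Sketch: commentThreads.list parameters combining order, maxResults, and searchTerms
params = {
    'part': 'snippet',         # omit 'replies' if you only need top-level comments
    'videoId': 'dQw4w9WgXcQ',
    'key': 'YOUR_API_KEY',
    'order': 'time',           # newest first; 'relevance' surfaces top comments
    'maxResults': 100,         # maximum allowed per page
    'searchTerms': 'tutorial'  # server-side keyword filter
}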
Response Fields
{
"kind": "youtube#commentThreadListResponse",
"pageInfo": {
"totalResults": 1000,
"resultsPerPage": 100
},
"nextPageToken": "QURTSl...",
"items": [
{
"kind": "youtube#commentThread",
"id": "UgyxQmZp...",
"snippet": {
"channelId": "UC...",
"videoId": "dQw4...",
"topLevelComment": {
"snippet": {
"authorDisplayName": "@Username",
"authorProfileImageUrl": "https://...",
"authorChannelUrl": "http://youtube.com/...",
"authorChannelId": { "value": "UC..." },
"textDisplay": "Comment text here",
"textOriginal": "Comment text here",
"likeCount": 42,
"publishedAt": "2026-01-07T10:30:00Z",
"updatedAt": "2026-01-07T10:30:00Z"
}
},
"totalReplyCount": 5
},
"replies": {
"comments": [...]
}
}
]
}
Quota Management
Understanding Quota Costs
| Operation | Quota Cost |
|---|---|
| commentThreads.list | 1 unit |
| comments.list | 1 unit |
| comments.insert | 50 units |
| comments.update | 50 units |
| comments.delete | 50 units |
| comments.markAsSpam | 50 units |
Daily Quota Limits
| Tier | Daily Quota | Approx. Comment Reads |
|---|---|---|
| Free | 10,000 units | ~1,000,000 comments |
| Standard | 1,000,000 units | ~100,000,000 comments |
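The read estimates above follow directly from the 1-unit cost of each list call; a small helper makes the arithmetic explicit (a sketch that assumes maxResults=100 on every call):
def estimate_daily_reads(daily_quota, results_per_call=100, cost_per_call=1):
    """Estimate how many comment threads can be read per day from a quota budget."""
    calls = daily_quota // cost_per_call
    return calls * results_per_call

print(estimate_daily_reads(10_000))     # Free tier: 10,000 calls -> ~1,000,000 threads
print(estimate_daily_reads(1_000_000))  # Higher quota: ~100,000,000 threads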
Optimizing Quota Usage
1. Maximize results per request:
params['maxResults'] = 100 # Always use maximum
2. Only request needed parts:
# If you don't need replies
params['part'] = 'snippet' # Instead of 'snippet,replies'
3. Cache results:
import json
from datetime import datetime
def cache_comments(comments, video_id):
cache_data = {
'video_id': video_id,
'cached_at': datetime.now().isoformat(),
'comments': comments
}
with open(f'cache_{video_id}.json', 'w') as f:
json.dump(cache_data, f)
def load_cached_comments(video_id, max_age_hours=24):
try:
with open(f'cache_{video_id}.json', 'r') as f:
cache_data = json.load(f)
cached_time = datetime.fromisoformat(cache_data['cached_at'])
if (datetime.now() - cached_time).total_seconds() < max_age_hours * 3600:
return cache_data['comments']
except FileNotFoundError:
pass
return None
4. Implement exponential backoff:
import time
def get_comments_with_retry(video_id, api_key, max_retries=5):
for attempt in range(max_retries):
try:
return get_comments(video_id, api_key)
except Exception as e:
if '403' in str(e): # Quota exceeded
wait_time = (2 ** attempt) * 60 # Exponential backoff
print(f"Quota exceeded. Waiting {wait_time}s...")
time.sleep(wait_time)
else:
raise
raise Exception("Max retries exceeded")Handling Pagination
Basic Pagination
def get_all_comments(video_id, api_key):
all_comments = []
page_token = None
page_count = 0
while True:
params = {
'part': 'snippet,replies',
'videoId': video_id,
'key': api_key,
'maxResults': 100
}
if page_token:
params['pageToken'] = page_token
response = requests.get(
'https://www.googleapis.com/youtube/v3/commentThreads',
params=params
)
data = response.json()
all_comments.extend(data.get('items', []))
page_count += 1
print(f"Page {page_count}: Retrieved {len(data.get('items', []))} comments")
page_token = data.get('nextPageToken')
if not page_token:
break
print(f"Total: {len(all_comments)} comment threads from {page_count} pages")
return all_comments
Getting All Replies
By default, commentThreads only returns up to 5 replies per comment. For comments with more replies:
def get_all_replies(parent_id, api_key):
"""Get all replies to a specific comment."""
all_replies = []
page_token = None
while True:
params = {
'part': 'snippet',
'parentId': parent_id,
'key': api_key,
'maxResults': 100
}
if page_token:
params['pageToken'] = page_token
response = requests.get(
'https://www.googleapis.com/youtube/v3/comments',
params=params
)
data = response.json()
all_replies.extend(data.get('items', []))
page_token = data.get('nextPageToken')
if not page_token:
break
return all_replies
# Usage: with raw commentThread items (e.g., from get_all_comments) where totalReplyCount > 5
for comment in comments:
if comment['snippet']['totalReplyCount'] > 5:
all_replies = get_all_replies(comment['id'], API_KEY)
print(f"Comment {comment['id']} has {len(all_replies)} replies")Error Handling
Common Error Codes
| Code | Error | Solution |
|---|---|---|
| 400 | Bad Request | Check parameter format |
| 403 | Forbidden | Quota exceeded or API not enabled |
| 403 | commentsDisabled | Video has comments disabled |
| 404 | Not Found | Invalid video ID |
| 500 | Internal Error | YouTube server issue, retry |
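Error responses also carry a machine-readable reason code alongside the message; branching on it is more reliable than matching message text (a sketch that assumes the error body has already been parsed into a dict, as in the handler below):
def error_reasons(data):
    """Collect machine-readable reason codes (e.g., 'commentsDisabled', 'quotaExceeded') from an error body."""
    return [err.get('reason') for err in data.get('error', {}).get('errors', [])]

# Usage sketch with the parsed JSON of a failed request:
# if 'commentsDisabled' in error_reasons(data): skip this video
# if 'quotaExceeded' in error_reasons(data): stop until the daily quota resets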
Robust Error Handling
import requests
from time import sleep
class YouTubeAPIError(Exception):
pass
def get_comments_robust(video_id, api_key):
url = 'https://www.googleapis.com/youtube/v3/commentThreads'
params = {
'part': 'snippet',
'videoId': video_id,
'key': api_key,
'maxResults': 100
}
try:
response = requests.get(url, params=params, timeout=30)
response.raise_for_status()
data = response.json()
if 'error' in data:
error = data['error']
code = error.get('code')
message = error.get('message', 'Unknown error')
if code == 403:
if 'commentsDisabled' in message:
print(f"Comments are disabled for video {video_id}")
return []
elif 'quotaExceeded' in message:
raise YouTubeAPIError("Daily quota exceeded")
raise YouTubeAPIError(f"API Error {code}: {message}")
return data.get('items', [])
except requests.exceptions.Timeout:
raise YouTubeAPIError("Request timed out")
except requests.exceptions.RequestException as e:
raise YouTubeAPIError(f"Request failed: {str(e)}")Use Cases and Examples
1. Comment Analytics Dashboard
from collections import Counter
from datetime import datetime
def analyze_comments(comments):
"""Generate analytics from comment data."""
analysis = {
'total_comments': len(comments),
'total_likes': sum(c['likes'] for c in comments),
'total_replies': sum(c['reply_count'] for c in comments),
'unique_authors': len(set(c['author'] for c in comments)),
'avg_likes': 0,
'top_commenters': [],
'comments_by_date': {}
}
if comments:
analysis['avg_likes'] = analysis['total_likes'] / len(comments)
# Top commenters
author_counts = Counter(c['author'] for c in comments)
analysis['top_commenters'] = author_counts.most_common(10)
# Comments by date
for comment in comments:
date = comment['published_at'][:10] # YYYY-MM-DD
analysis['comments_by_date'][date] = analysis['comments_by_date'].get(date, 0) + 1
return analysis
2. Keyword Monitoring
import re
def find_keyword_mentions(comments, keywords):
"""Find comments mentioning specific keywords."""
results = {kw: [] for kw in keywords}
for comment in comments:
text = comment['text'].lower()
for keyword in keywords:
if re.search(rf'\b{keyword.lower()}\b', text):
results[keyword].append({
'author': comment['author'],
'text': comment['text'],
'likes': comment['likes']
})
return results
# Usage
keywords = ['amazing', 'tutorial', 'help', 'question']
mentions = find_keyword_mentions(comments, keywords)
3. Spam Detection
def detect_spam(comments):
"""Identify potentially spam comments."""
spam_patterns = [
r'sub4sub',
r'check (out )?my channel',
r'free subscribers',
r'https?://', # Links
r'\$\d+', # Money mentions
r'dm me',
r'follow for follow'
]
potential_spam = []
for comment in comments:
text = comment['text'].lower()
for pattern in spam_patterns:
if re.search(pattern, text, re.IGNORECASE):
potential_spam.append({
'comment': comment,
'pattern_matched': pattern
})
break
return potential_spam
Rate Limiting Best Practices
Implementing Rate Limiting
import time
from collections import deque
class RateLimiter:
def __init__(self, max_requests, time_window):
self.max_requests = max_requests
self.time_window = time_window # seconds
self.requests = deque()
def wait_if_needed(self):
now = time.time()
# Remove old requests outside time window
while self.requests and self.requests[0] < now - self.time_window:
self.requests.popleft()
# Wait if at limit
if len(self.requests) >= self.max_requests:
sleep_time = self.requests[0] + self.time_window - now
if sleep_time > 0:
print(f"Rate limit reached. Sleeping {sleep_time:.2f}s...")
time.sleep(sleep_time)
self.requests.append(now)
# Usage: Max 100 requests per minute
limiter = RateLimiter(max_requests=100, time_window=60)
for video_id in video_ids:
limiter.wait_if_needed()
comments = get_comments(video_id, API_KEY)
Frequently Asked Questions
What is the difference between the commentThreads and comments endpoints?
commentThreads returns top-level comments with their replies bundled together, which is best for getting all comments on a video. comments returns individual comments and is used for getting additional replies or comment details by ID.
How many comments can I retrieve per day on the free quota?
With maxResults=100, you can make 10,000 API calls per day. Each call returns up to 100 comments, so theoretically ~1,000,000 comments per day, though pagination overhead reduces this slightly.
Why do I only see up to 5 replies per comment?
The commentThreads endpoint returns only up to 5 replies per comment. For comments with more replies, use the comments.list endpoint with the parentId parameter and handle pagination.
Can I search or filter comments by keyword?
Yes, the searchTerms parameter works with commentThreads.list. However, it's often more efficient to retrieve all comments and search locally, especially if you're searching for multiple terms.
Conclusion
Key points for successful implementation: always handle pagination, implement proper error handling and rate limiting, cache results to conserve quota, and use the most efficient parameters for your use case. Whether building an analytics dashboard, monitoring brand mentions, or conducting research, the API provides the foundation for YouTube comment automation.
Related Resources:
- YouTube Comment Extractor Guide
- Download YouTube Comments as JSON/CSV
- YouTube Comment Analysis Tools
Written By
The NoteLM team specializes in AI-powered video summarization and learning tools. We are passionate about making video content more accessible and efficient for learners worldwide.