
YouTube Comments API: Get Comments Programmatically

Complete developer guide to the YouTube Comments API. Learn to retrieve comments, handle pagination, manage rate limits, and build applications that analyze YouTube engagement using the YouTube Data API v3.

By NoteLM Team · Published 2026-01-07

Key Takeaways

  • YouTube Comments API is part of YouTube Data API v3
  • Free tier: 10,000 quota units per day (no credit card required)
  • Main endpoints: commentThreads.list and comments.list
  • Each API call costs 1 quota unit regardless of results returned
  • Pagination required for videos with >100 comments
  • OAuth 2.0 required for posting/modifying comments; API key sufficient for reading

The YouTube Comments API is part of YouTube Data API v3, allowing developers to programmatically retrieve, post, and manage comments on YouTube videos. To get comments, use the commentThreads.list endpoint with a video ID and API key. The free tier provides 10,000 quota units daily—enough to retrieve approximately 1 million comments per day when every call requests the maximum of 100 results.
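
In practice, a single request is enough to see the shape of the data. Here is a minimal sketch using the requests library (substitute your own key; the full walkthrough below adds pagination and error handling):

import requests

resp = requests.get(
    'https://www.googleapis.com/youtube/v3/commentThreads',
    params={
        'part': 'snippet',
        'videoId': 'dQw4w9WgXcQ',
        'key': 'YOUR_API_KEY',
        'maxResults': 5
    }
)
for item in resp.json().get('items', []):
    print(item['snippet']['topLevelComment']['snippet']['textDisplay'])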

Getting Started

Step 1: Create a Google Cloud Project

  1. Go to Google Cloud Console
  2. Click "Create Project"
  3. Name your project (e.g., "YouTube Comment Analysis")
  4. Click "Create"

Step 2: Enable YouTube Data API v3

  1. In Cloud Console, go to "APIs & Services" > "Library"
  2. Search for "YouTube Data API v3"
  3. Click the API, then click "Enable"

Step 3: Create API Credentials

For read-only access (retrieving comments):

  1. Go to "APIs & Services" > "Credentials"
  2. Click "Create Credentials" > "API Key"
  3. Copy and securely store your API key
  4. (Optional) Restrict key to YouTube Data API v3 only

For write access (posting comments):

  1. Create an OAuth 2.0 Client ID instead
  2. Configure the consent screen
  3. Download the client secrets JSON
  4. Implement the OAuth flow in your application (see the sketch below)
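
As a sketch of that last step, the google-auth-oauthlib package (one option among several OAuth 2.0 clients) runs the installed-app flow against the downloaded client secrets; the youtube.force-ssl scope covers reading and writing comments:

from google_auth_oauthlib.flow import InstalledAppFlow

# Assumes 'client_secret.json' is the file downloaded in step 3
flow = InstalledAppFlow.from_client_secrets_file(
    'client_secret.json',
    scopes=['https://www.googleapis.com/auth/youtube.force-ssl']
)
credentials = flow.run_local_server(port=0)  # opens a browser for user consent
print(credentials.token)  # short-lived bearer token for authorized calls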

API Endpoints Overview

CommentThreads Endpoint

Retrieves top-level comments and optionally their replies.

GET https://www.googleapis.com/youtube/v3/commentThreads

Use for:

  • Getting all comments on a video
  • Retrieving comments with their reply threads
  • Sorting by relevance (top comments) or time

Comments Endpoint

Retrieves individual comments or replies to a specific comment.

GET https://www.googleapis.com/youtube/v3/comments

Use for:

  • Getting replies to a specific comment
  • Retrieving comment details by ID
  • Updating or deleting comments (with OAuth)

Retrieving Comments: Code Examples

Python: Basic Comment Retrieval

import requests

API_KEY = 'YOUR_API_KEY'
VIDEO_ID = 'dQw4w9WgXcQ'

def get_comments(video_id, api_key, max_results=100):
    """Retrieve comments from a YouTube video."""
    
    url = 'https://www.googleapis.com/youtube/v3/commentThreads'
    params = {
        'part': 'snippet,replies',
        'videoId': video_id,
        'key': api_key,
        'maxResults': max_results,
        'order': 'relevance'  # or 'time' for newest first
    }
    
    comments = []
    
    while True:
        response = requests.get(url, params=params)
        data = response.json()
        
        if 'error' in data:
            print(f"Error: {data['error']['message']}")
            break
        
        for item in data.get('items', []):
            # Get top-level comment
            top_comment = item['snippet']['topLevelComment']['snippet']
            comment_data = {
                'id': item['id'],
                'author': top_comment['authorDisplayName'],
                'author_channel_id': top_comment.get('authorChannelId', {}).get('value', ''),
                'text': top_comment['textDisplay'],
                'likes': top_comment['likeCount'],
                'published_at': top_comment['publishedAt'],
                'updated_at': top_comment['updatedAt'],
                'reply_count': item['snippet']['totalReplyCount'],
                'replies': []
            }
            
            # Get replies if available
            if 'replies' in item:
                for reply in item['replies']['comments']:
                    reply_snippet = reply['snippet']
                    comment_data['replies'].append({
                        'id': reply['id'],
                        'author': reply_snippet['authorDisplayName'],
                        'text': reply_snippet['textDisplay'],
                        'likes': reply_snippet['likeCount'],
                        'published_at': reply_snippet['publishedAt']
                    })
            
            comments.append(comment_data)
        
        # Check for more pages
        if 'nextPageToken' in data:
            params['pageToken'] = data['nextPageToken']
        else:
            break
    
    return comments

# Usage
comments = get_comments(VIDEO_ID, API_KEY)
print(f"Retrieved {len(comments)} comment threads")

Python: Export to JSON/CSV

import json
import csv

def export_to_json(comments, filename):
    """Export comments to JSON file."""
    with open(filename, 'w', encoding='utf-8') as f:
        json.dump(comments, f, indent=2, ensure_ascii=False)

def export_to_csv(comments, filename):
    """Export comments to CSV file."""
    with open(filename, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['id', 'author', 'text', 'likes', 'published_at', 'reply_count'])
        
        for comment in comments:
            writer.writerow([
                comment['id'],
                comment['author'],
                comment['text'],
                comment['likes'],
                comment['published_at'],
                comment['reply_count']
            ])

# Usage
comments = get_comments(VIDEO_ID, API_KEY)
export_to_json(comments, 'comments.json')
export_to_csv(comments, 'comments.csv')

JavaScript: Node.js Implementation

const axios = require('axios');

const API_KEY = 'YOUR_API_KEY';
const VIDEO_ID = 'dQw4w9WgXcQ';

async function getComments(videoId, apiKey, maxResults = 100) {
    const baseUrl = 'https://www.googleapis.com/youtube/v3/commentThreads';
    const comments = [];
    let pageToken = null;
    
    do {
        const params = {
            part: 'snippet,replies',
            videoId: videoId,
            key: apiKey,
            maxResults: maxResults,
            order: 'relevance'
        };
        
        if (pageToken) {
            params.pageToken = pageToken;
        }
        
        try {
            const response = await axios.get(baseUrl, { params });
            const data = response.data;
            
            for (const item of data.items || []) {
                const topComment = item.snippet.topLevelComment.snippet;
                const commentData = {
                    id: item.id,
                    author: topComment.authorDisplayName,
                    text: topComment.textDisplay,
                    likes: topComment.likeCount,
                    publishedAt: topComment.publishedAt,
                    replyCount: item.snippet.totalReplyCount,
                    replies: []
                };
                
                if (item.replies) {
                    for (const reply of item.replies.comments) {
                        commentData.replies.push({
                            id: reply.id,
                            author: reply.snippet.authorDisplayName,
                            text: reply.snippet.textDisplay,
                            likes: reply.snippet.likeCount
                        });
                    }
                }
                
                comments.push(commentData);
            }
            
            pageToken = data.nextPageToken;
            
        } catch (error) {
            console.error('Error:', error.response?.data?.error?.message || error.message);
            break;
        }
        
    } while (pageToken);
    
    return comments;
}

// Usage
(async () => {
    const comments = await getComments(VIDEO_ID, API_KEY);
    console.log(`Retrieved ${comments.length} comment threads`);
})();

API Parameters Reference

commentThreads.list Parameters

Parameter     Required      Description
part          Yes           Data to return: snippet, replies, id
videoId       Conditional   Video to get comments from
channelId     Conditional   Get all comments on a channel
id            Conditional   Specific comment thread IDs
key           Yes           Your API key
maxResults    No            1-100, default 20
pageToken     No            Token for pagination
order         No            relevance or time
searchTerms   No            Filter by search terms
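
As an example of combining these parameters, the following sketch requests the 50 newest threads mentioning "tutorial" (the key and video ID are placeholders):

import requests

params = {
    'part': 'snippet',
    'videoId': 'dQw4w9WgXcQ',
    'key': 'YOUR_API_KEY',
    'maxResults': 50,
    'order': 'time',           # newest first
    'searchTerms': 'tutorial'  # server-side text filter
}
response = requests.get('https://www.googleapis.com/youtube/v3/commentThreads', params=params)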

Response Fields

{
  "kind": "youtube#commentThreadListResponse",
  "pageInfo": {
    "totalResults": 1000,
    "resultsPerPage": 100
  },
  "nextPageToken": "QURTSl...",
  "items": [
    {
      "kind": "youtube#commentThread",
      "id": "UgyxQmZp...",
      "snippet": {
        "channelId": "UC...",
        "videoId": "dQw4...",
        "topLevelComment": {
          "snippet": {
            "authorDisplayName": "@Username",
            "authorProfileImageUrl": "https://...",
            "authorChannelUrl": "http://youtube.com/...",
            "authorChannelId": { "value": "UC..." },
            "textDisplay": "Comment text here",
            "textOriginal": "Comment text here",
            "likeCount": 42,
            "publishedAt": "2026-01-07T10:30:00Z",
            "updatedAt": "2026-01-07T10:30:00Z"
          }
        },
        "totalReplyCount": 5
      },
      "replies": {
        "comments": [...]
      }
    }
  ]
}
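
The useful fields sit several levels deep. In Python, the paths look like this (a sketch, assuming data holds a parsed response like the one above):

thread = data['items'][0]
snippet = thread['snippet']['topLevelComment']['snippet']

text = snippet['textOriginal']    # raw text, without HTML formatting
likes = snippet['likeCount']
total_replies = thread['snippet']['totalReplyCount']
bundled = thread.get('replies', {}).get('comments', [])  # may be only a subset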

Quota Management

Understanding Quota Costs

Operation               Quota Cost
commentThreads.list     1 unit
comments.list           1 unit
comments.insert         50 units
comments.update         50 units
comments.delete         50 units
comments.markAsSpam     50 units

Daily Quota Limits

Tier        Daily Quota        Approx. Comment Reads
Free        10,000 units       ~1,000,000 comments
Standard    1,000,000 units    ~100,000,000 comments
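
To budget quota before a large job, the arithmetic is one unit per page of up to 100 threads. A small helper makes the estimate explicit (a sketch; fetching full reply chains for busy threads adds extra comments.list calls on top):

import math

def estimate_read_quota(thread_count, max_results=100):
    """Quota units needed to page through all top-level threads of a video."""
    return math.ceil(thread_count / max_results)  # 1 unit per commentThreads.list call

# A video with 25,000 comment threads costs about 250 units, so the
# 10,000-unit free tier covers roughly 40 such videos per day.
print(estimate_read_quota(25000))  # 250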

Optimizing Quota Usage

1. Maximize results per request:

params['maxResults'] = 100  # Always use maximum

2. Only request needed parts:

# If you don't need replies
params['part'] = 'snippet'  # Instead of 'snippet,replies'

3. Cache results:

import json
from datetime import datetime

def cache_comments(comments, video_id):
    cache_data = {
        'video_id': video_id,
        'cached_at': datetime.now().isoformat(),
        'comments': comments
    }
    with open(f'cache_{video_id}.json', 'w') as f:
        json.dump(cache_data, f)

def load_cached_comments(video_id, max_age_hours=24):
    try:
        with open(f'cache_{video_id}.json', 'r') as f:
            cache_data = json.load(f)
        cached_time = datetime.fromisoformat(cache_data['cached_at'])
        if (datetime.now() - cached_time).total_seconds() < max_age_hours * 3600:
            return cache_data['comments']
    except FileNotFoundError:
        pass
    return None
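
Tying the two together with the earlier get_comments gives a fetch that only spends quota when the cache is stale (usage sketch):

comments = load_cached_comments(VIDEO_ID)
if comments is None:
    comments = get_comments(VIDEO_ID, API_KEY)
    cache_comments(comments, VIDEO_ID)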

4. Implement exponential backoff:

import time

def get_comments_with_retry(video_id, api_key, max_retries=5):
    # Assumes the fetch call raises on quota errors
    # (see get_comments_robust in the Error Handling section below)
    for attempt in range(max_retries):
        try:
            return get_comments(video_id, api_key)
        except Exception as e:
            if '403' in str(e):  # Quota exceeded
                wait_time = (2 ** attempt) * 60  # Exponential backoff
                print(f"Quota exceeded. Waiting {wait_time}s...")
                time.sleep(wait_time)
            else:
                raise
    raise Exception("Max retries exceeded")

Handling Pagination

Basic Pagination

def get_all_comments(video_id, api_key):
    all_comments = []
    page_token = None
    page_count = 0
    
    while True:
        params = {
            'part': 'snippet,replies',
            'videoId': video_id,
            'key': api_key,
            'maxResults': 100
        }
        
        if page_token:
            params['pageToken'] = page_token
        
        response = requests.get(
            'https://www.googleapis.com/youtube/v3/commentThreads',
            params=params
        )
        data = response.json()
        
        all_comments.extend(data.get('items', []))
        page_count += 1
        
        print(f"Page {page_count}: Retrieved {len(data.get('items', []))} comments")
        
        page_token = data.get('nextPageToken')
        if not page_token:
            break
    
    print(f"Total: {len(all_comments)} comment threads from {page_count} pages")
    return all_comments

Getting All Replies

By default, commentThreads only returns up to 5 replies per comment. For comments with more replies:

def get_all_replies(parent_id, api_key):
    """Get all replies to a specific comment."""
    all_replies = []
    page_token = None
    
    while True:
        params = {
            'part': 'snippet',
            'parentId': parent_id,
            'key': api_key,
            'maxResults': 100
        }
        
        if page_token:
            params['pageToken'] = page_token
        
        response = requests.get(
            'https://www.googleapis.com/youtube/v3/comments',
            params=params
        )
        data = response.json()
        
        all_replies.extend(data.get('items', []))
        
        page_token = data.get('nextPageToken')
        if not page_token:
            break
    
    return all_replies

# Usage: for raw thread items (as returned by get_all_comments above)
# whose totalReplyCount exceeds the bundled replies
for thread in get_all_comments(VIDEO_ID, API_KEY):
    if thread['snippet']['totalReplyCount'] > 5:
        all_replies = get_all_replies(thread['id'], API_KEY)
        print(f"Thread {thread['id']} has {len(all_replies)} replies")

Error Handling

Common Error Codes

Code    Error               Solution
400     Bad Request         Check parameter format
403     Forbidden           Quota exceeded or API not enabled
403     commentsDisabled    Video has comments disabled
404     Not Found           Invalid video ID
500     Internal Error      YouTube server issue, retry

Robust Error Handling

import requests
from time import sleep

class YouTubeAPIError(Exception):
    pass

def get_comments_robust(video_id, api_key):
    url = 'https://www.googleapis.com/youtube/v3/commentThreads'
    params = {
        'part': 'snippet',
        'videoId': video_id,
        'key': api_key,
        'maxResults': 100
    }
    
    try:
        # Don't call raise_for_status() here: a 403 would raise before
        # the structured error body below could be inspected
        response = requests.get(url, params=params, timeout=30)
        data = response.json()
        
        if 'error' in data:
            error = data['error']
            code = error.get('code')
            message = error.get('message', 'Unknown error')
            # The machine-readable cause lives in error['errors'][0]['reason']
            reason = error.get('errors', [{}])[0].get('reason', '')
            
            if code == 403:
                if reason == 'commentsDisabled':
                    print(f"Comments are disabled for video {video_id}")
                    return []
                elif reason in ('quotaExceeded', 'dailyLimitExceeded'):
                    raise YouTubeAPIError("Daily quota exceeded")
            
            raise YouTubeAPIError(f"API Error {code}: {message}")
        
        return data.get('items', [])
        
    except requests.exceptions.Timeout:
        raise YouTubeAPIError("Request timed out")
    except requests.exceptions.RequestException as e:
        raise YouTubeAPIError(f"Request failed: {str(e)}")

Use Cases and Examples

1. Comment Analytics Dashboard

from collections import Counter
from datetime import datetime

def analyze_comments(comments):
    """Generate analytics from comment data."""
    
    analysis = {
        'total_comments': len(comments),
        'total_likes': sum(c['likes'] for c in comments),
        'total_replies': sum(c['reply_count'] for c in comments),
        'unique_authors': len(set(c['author'] for c in comments)),
        'avg_likes': 0,
        'top_commenters': [],
        'comments_by_date': {}
    }
    
    if comments:
        analysis['avg_likes'] = analysis['total_likes'] / len(comments)
    
    # Top commenters
    author_counts = Counter(c['author'] for c in comments)
    analysis['top_commenters'] = author_counts.most_common(10)
    
    # Comments by date
    for comment in comments:
        date = comment['published_at'][:10]  # YYYY-MM-DD
        analysis['comments_by_date'][date] = analysis['comments_by_date'].get(date, 0) + 1
    
    return analysis
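
Usage follows the earlier examples, feeding in the output of get_comments (a sketch):

comments = get_comments(VIDEO_ID, API_KEY)
stats = analyze_comments(comments)
print(f"{stats['total_comments']} threads, {stats['unique_authors']} authors, "
      f"avg {stats['avg_likes']:.1f} likes")
for author, count in stats['top_commenters'][:3]:
    print(f"{author}: {count} comments")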

2. Keyword Monitoring

import re

def find_keyword_mentions(comments, keywords):
    """Find comments mentioning specific keywords."""
    
    results = {kw: [] for kw in keywords}
    
    for comment in comments:
        text = comment['text'].lower()
        for keyword in keywords:
            # re.escape keeps regex metacharacters in keywords from breaking the pattern
            if re.search(rf'\b{re.escape(keyword.lower())}\b', text):
                results[keyword].append({
                    'author': comment['author'],
                    'text': comment['text'],
                    'likes': comment['likes']
                })
    
    return results

# Usage
keywords = ['amazing', 'tutorial', 'help', 'question']
mentions = find_keyword_mentions(comments, keywords)

3. Spam Detection

import re

def detect_spam(comments):
    """Identify potentially spam comments."""
    
    spam_patterns = [
        r'sub4sub',
        r'check (out )?my channel',
        r'free subscribers',
        r'https?://',  # Links
        r'\$\d+',  # Money mentions
        r'dm me',
        r'follow for follow'
    ]
    
    potential_spam = []
    
    for comment in comments:
        text = comment['text'].lower()
        for pattern in spam_patterns:
            if re.search(pattern, text, re.IGNORECASE):
                potential_spam.append({
                    'comment': comment,
                    'pattern_matched': pattern
                })
                break
    
    return potential_spam
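
A usage sketch over the same get_comments output; matches are candidates for manual review rather than automatic removal:

flagged = detect_spam(comments)
print(f"{len(flagged)} potentially spammy comments")
for hit in flagged[:5]:
    print(f"[{hit['pattern_matched']}] {hit['comment']['text'][:60]}")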

Rate Limiting Best Practices

Implementing Rate Limiting

import time
from collections import deque

class RateLimiter:
    def __init__(self, max_requests, time_window):
        self.max_requests = max_requests
        self.time_window = time_window  # seconds
        self.requests = deque()
    
    def wait_if_needed(self):
        now = time.time()
        
        # Remove old requests outside time window
        while self.requests and self.requests[0] < now - self.time_window:
            self.requests.popleft()
        
        # Wait if at limit
        if len(self.requests) >= self.max_requests:
            sleep_time = self.requests[0] + self.time_window - now
            if sleep_time > 0:
                print(f"Rate limit reached. Sleeping {sleep_time:.2f}s...")
                time.sleep(sleep_time)
        
        self.requests.append(now)

# Usage: max 100 requests per minute
limiter = RateLimiter(max_requests=100, time_window=60)

video_ids = ['VIDEO_ID_1', 'VIDEO_ID_2']  # placeholder list of video IDs
for video_id in video_ids:
    limiter.wait_if_needed()
    comments = get_comments(video_id, API_KEY)

Frequently Asked Questions

Q1: How do I get a YouTube Comments API key?
Create a Google Cloud project at console.cloud.google.com, enable YouTube Data API v3, then create an API key under Credentials. The free tier provides 10,000 quota units daily—no credit card required.

Q2: What's the difference between commentThreads and comments endpoints?
commentThreads returns top-level comments with their replies bundled together—best for getting all comments on a video. comments returns individual comments and is used for getting additional replies or comment details by ID.

Q3: How many comments can I retrieve per day?
With the free tier (10,000 quota units) and maxResults=100, you can make 10,000 API calls per day. Each call returns up to 100 comments, so theoretically ~1,000,000 comments per day, though pagination overhead reduces this slightly.

Q4: Can I post comments via the API?
Yes, but it requires OAuth 2.0 authentication (not just an API key). The user must authorize your application, and posting costs 50 quota units per comment.

Q5: How do I get ALL replies to a comment?
The commentThreads endpoint returns only up to 5 replies per comment. For comments with more replies, use the comments.list endpoint with the parentId parameter and handle pagination.

Q6: Why am I getting a 403 error?
Common causes: quota exceeded (wait until quota resets at midnight Pacific), API not enabled for your project, or comments are disabled on the video. Check the error message for specific details.

Q7: Can I search comments for specific keywords via API?
Yes, use the searchTerms parameter with commentThreads.list. However, it's often more efficient to retrieve all comments and search locally, especially if you're searching for multiple terms.

Q8: How do I handle videos with millions of comments?
Use pagination, caching, and consider sampling. For analytics, a representative sample (10,000-100,000 comments) often provides statistically valid insights without processing millions of comments.

Conclusion

The YouTube Comments API provides powerful programmatic access to video comments for analysis, monitoring, and application development. With 10,000 free quota units daily, you can retrieve comments from hundreds of videos or pull roughly a million comments a day from popular videos.

Key points for successful implementation: always handle pagination, implement proper error handling and rate limiting, cache results to conserve quota, and use the most efficient parameters for your use case. Whether building an analytics dashboard, monitoring brand mentions, or conducting research, the API provides the foundation for YouTube comment automation.

Related Resources:

  • YouTube Comment Extractor Guide
  • Download YouTube Comments as JSON/CSV
  • YouTube Comment Analysis Tools

Written By

NoteLM Team

The NoteLM team specializes in AI-powered video summarization and learning tools. We are passionate about making video content more accessible and efficient for learners worldwide.

Last verified: January 7, 2026
API quotas and features may change. Always refer to official YouTube API documentation for current specifications. Code examples are for educational purposes.
