Finally copied all my tweets to WordPress

I've been wanting to move all my old tweets over to this site for a while, but ever since X shut down free API access, I'd been too lazy to process the data myself for re-import.

But with AI, this kind of data-conversion task becomes much simpler: tell it what you want to do, give it a few samples of the data, and it will work out the rest.

So here is what I did:

  • Exported the full data dump from X.
  • Ran the Python script below on tweets.js to convert it to CSV for import.
  • Installed the free version of WP All Import, then imported full_text as the post title and created_at as the creation date.
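For context, tweets.js in the X archive isn't plain JSON: it's a JavaScript file that assigns an array of tweet objects to a variable, which is why the script below strips the assignment prefix before parsing. A minimal sketch of that parsing step, using a made-up one-tweet sample:

```python
import json

# A made-up miniature of what tweets.js looks like in the X data export
sample = '''window.YTD.tweet.part0 = [
  {"tweet": {"id": "1", "full_text": "hello world",
             "created_at": "Fri Jun 06 08:46:14 +0000 2025"}}
]'''

# Strip the JavaScript assignment so the remainder is plain JSON
json_text = sample[len("window.YTD.tweet.part0 = "):].strip()
tweets = [item["tweet"] for item in json.loads(json_text)]
print(tweets[0]["full_text"])  # hello world
```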

Python code to prepare the import CSV:

import json
import csv
from datetime import datetime

# Function to expand shortened URLs in the full_text of a tweet using the entities.urls object
def expand_urls(full_text, urls):
    for url_info in urls:
        short_url = url_info.get('url', '')
        expanded_url = url_info.get('expanded_url', '')
        if short_url and expanded_url:
            full_text = full_text.replace(short_url, expanded_url)
    return full_text

# Function to convert Twitter date format to WordPress compatible date format (YYYY-MM-DD HH:MM:SS)
def format_date(twitter_date_str):
    # Example input: 'Fri Jun 06 08:46:14 +0000 2025'
    dt = datetime.strptime(twitter_date_str, '%a %b %d %H:%M:%S %z %Y')
    # WordPress standard format: 'YYYY-MM-DD HH:MM:SS' (UTC time)
    return dt.strftime('%Y-%m-%d %H:%M:%S')

# Read the contents of tweets.js using UTF-8 encoding
with open('tweets.js', 'r', encoding='utf-8') as file:
    data = file.read()

# Remove the "window.YTD.tweet.part0 = " JavaScript assignment so the remainder is valid JSON
data = data[len("window.YTD.tweet.part0 = "):].strip()

# parse the data as json
tweets_data = json.loads(data)
tweets = [tweet["tweet"] for tweet in tweets_data]

# Expand URLs inside full_text and format date for each tweet
for tweet in tweets:
    full_text = tweet.get('full_text', '')
    urls = tweet.get('entities', {}).get('urls', [])
    expanded_text = expand_urls(full_text, urls)
    tweet['full_text'] = expanded_text

    created_at = tweet.get('created_at', '')
    if created_at:
        tweet['created_at'] = format_date(created_at)

# Collect all unique fields across tweets (sorted so the column order is stable between runs)
field_names = sorted({key for tweet in tweets for key in tweet})

# Write the tweet objects to a CSV file in UTF-8
with open('tweets202308.csv', 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=field_names, quoting=csv.QUOTE_ALL)
    writer.writeheader()
    for tweet in tweets:
        writer.writerow(tweet)

print("Done")
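If you want a quick sanity check before running this on a real archive, the two transformations (t.co expansion and date formatting) can be exercised on made-up data; the URLs and timestamp here are invented for illustration:

```python
from datetime import datetime

# Made-up tweet fragment mirroring the archive's entities.urls structure
full_text = 'New post: https://t.co/abc123'
urls = [{'url': 'https://t.co/abc123', 'expanded_url': 'https://example.com/post'}]

# Same replacement logic as expand_urls above
for info in urls:
    full_text = full_text.replace(info['url'], info['expanded_url'])
print(full_text)  # New post: https://example.com/post

# Same conversion as format_date above
raw = 'Fri Jun 06 08:46:14 +0000 2025'
dt = datetime.strptime(raw, '%a %b %d %H:%M:%S %z %Y')
print(dt.strftime('%Y-%m-%d %H:%M:%S'))  # 2025-06-06 08:46:14
```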