Direct link to Google Takeout? - CLI : r/DataHoarder

Question?

Has anyone had any success downloading Takeout archives from the CLI on a headless server? I've tried:

curl -JLO ultralongjavascriptgoogleurl

wget --content-disposition ultralongjavascriptgoogleurl

I've even tried logging in with brow.sh, but I hit endless sign-in issues. My archives are big and I want to download them directly on my server.


I was able to download Takeout files on a headless server using wget, without any additional authentication (I found this solution posted elsewhere). Steps:
- Initiate the download via the Takeout page in your browser
- Go to the browser's downloads page (Ctrl+J in most browsers)
- Locate the download that is currently in progress
- Right-click it and choose "Copy link address"
- Pause the download (be sure not to cancel the browser download before your wget finishes)
- From your terminal: wget "pasted_link_address"
Make sure to add the quotes around the pasted link address.
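
For very large archives it also helps to make the transfer resumable. A minimal sketch, assuming the URL is the one copied above (shown truncated here); -c resumes a partial file, and --content-disposition keeps the filename the server suggests:

wget -c --content-disposition "https://takeout.google.com/…/download?…"

If the transfer dies partway through, you can usually repeat the copy-link step in the browser and rerun the same command; wget -c picks up from the existing partial file.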

This was perfect! Thank you.


When I try this, I get "HTTP request sent, awaiting response... 400 Bad Request". Has anyone else run into this?


Worked perfectly! Thank you!


I was able to use the browser extension "cliget", which can grab the download links once the Takeout has been generated and you're looking at the popup to download it.

But even then, there was a weird issue: when I tried to download more than one link simultaneously, the others would stop downloading. There is some validation going on on Google's side that I couldn't figure out.

If anyone has figured out a way to automate the Google Takeout process, I'd be interested too. Even GAM doesn't seem to have an option to start the Takeout process, although with a combination of GAM and GYB I was able to avoid using Takeout entirely.
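
For reference, a sketch of the kind of command cliget produces; every header and cookie value below is a placeholder, not a real one, and the exact header set varies per session:

curl --header 'User-Agent: Mozilla/5.0 …' --header 'Cookie: SID=…' --output takeout-001.zip 'https://takeout.google.com/…/download?…'

The point is that the request replays your browser's session cookies, which is why a bare curl or wget on the same URL fails.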

Thank you for posting this! It's an older comment (2 years old), but I just tried it and it still seems to work!


Google Chrome users: use the CurlWget extension to get a ready-made command with the required headers and cookies.
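
Same idea as the cliget sketch above, just in wget form. A rough sketch of the shape of the generated command (all values are placeholders):

wget -c --header 'User-Agent: Mozilla/5.0 …' --header 'Cookie: SID=…' -O takeout-001.tgz 'https://takeout.google.com/…/download?…'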


I had some difficulty. This is what I did.

Install "webtop" docker container, "ubuntu-mate" version, add a volume mapping to where you want to download to.

Open the webtop gui using a browser

Open firefox browser in webtop and change the download folder to the appropriate folder

Download the files directly within the docker web-browser

Initially I used a different version of webtop, but I found the web-browser kept crashing.
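
A minimal sketch of that setup, assuming the linuxserver.io webtop image (the host path and port mapping are placeholders to adjust):

docker run -d --name=webtop -p 3000:3000 -v /path/to/downloads:/config/Downloads lscr.io/linuxserver/webtop:ubuntu-mate

Then browse to http://your-server:3000, open Firefox inside the desktop, and anything saved to ~/Downloads lands in /path/to/downloads on the host.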


I've been using this for a while, and it seems that today, at least, it's broken. I'll try again a bit later to see if it's working; Google Workspace was having issues earlier, so perhaps it's related, but it's certainly borked at the moment.


Sadly, this doesn't work for me anymore :(

I tried these URLs/commands:

wget -S 'https://takeout.google.com/u/1/takeout/download?j=…-4ac7221806b0&i=0&user=1059…&rapt=AEjHL…-8R…4ls&authuser=1'

wget -S 'https://storage.googleapis.com/takeout-eu/20231…/-1…/1685f0ff-4a33-…4ac7221806b0/1/ed4cba61-1368-47e7…?GoogleAccessId=…-ur2cpmlse80ljkr2f7j…@developer.gserviceaccount.com&Expires=1702966301&Signature=fXjWmu/…x2b3FX729W6iiOeuzpV0fGI9PdSrd4x/…/RHp%2B0woDcpC7eBJ%2BW6uXGHqfM0aGJ7N5VMiVbY4hTjrfZo/v0b9JE%3D&authuser=1'

wget -S 'https://accounts.google.com/AccountChooser?continue=https://takeout.google.com/settings/takeout/download?…-bf6e-4ac7221806b0%26i%3D0&Email=a…@gmail.com'

In all cases, I only get an HTML file, instead of a 50 GB tgz file.
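
An HTML response here is usually a sign-in or account-chooser page rather than the archive. A sketch of one thing worth trying, assuming you first export your browser's Google cookies to a Netscape-format cookies.txt (the filename is a placeholder), so the request carries your session:

wget -S --load-cookies cookies.txt --content-disposition 'https://takeout.google.com/u/1/takeout/download?…'

Note also that the storage.googleapis.com link is a signed URL with an Expires timestamp, so it stops working after that time regardless of cookies.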