
Direct link to Google Takeout? - CLI
Has anyone had any success using the CLI to download Takeout archives on a headless server? I've tried:
curl -JLO ultralongjavascriptgoogleurl
wget --content-disposition ultralongjavascriptgoogleurl
I've even tried logging in using brow.sh, but I have endless sign-in issues. My archives are big and I want to download them on my server.
I was able to download takeout files on a headless server using wget without additional authentication. I found the solution here:
Steps:
- Initiate the download via the Takeout page in your browser
- Go to the browser's downloads page (Ctrl+J in most browsers)
- Locate the download that is currently in progress
- Right-click it and choose "Copy link address"
- Pause the download (be sure not to cancel the browser download before your wget finishes)
- From your terminal:
wget "pasted_link_address"
Make sure to add the quotes around the pasted_link_address.
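For very large archives it can also help to resume interrupted transfers and give the file a sensible name. A minimal sketch, assuming the same copied link (the filename below is just a placeholder, and the link itself may be tied to your browser session and can expire):
wget -c -O takeout-001.tgz "pasted_link_address"
If the link does expire, copying a fresh address from the still-paused browser download and re-running the same command should let wget pick up where it left off, assuming the server accepts range requests.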
This was perfect! Thank you.
When I try this, I get "HTTP request sent, awaiting response... 400 Bad Request". Anyone else get this issue?
Worked perfectly! Thank you!
I was able to use a browser extension, "cliget", which can grab the links once the Takeout has been requested and you're looking at the download popup.
But even then, there was a weird issue: when I tried to download more than one link simultaneously, the others would stop downloading. There is some validation going on on Google's side that I couldn't figure out.
If anyone has figured out a way to automate the Google Takeout process, I'd be interested too. Even GAM doesn't seem to have an option to start the Takeout process, although with a combination of GAM and GYB I was able to avoid using Takeout entirely.
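For the Gmail portion specifically, a minimal GYB sketch (assuming GYB is installed and already authorized for the account; the address and folder below are placeholders):
gyb --email user@example.com --action backup --local-folder ./gmail-backup
This backs up the mailbox to a local folder without going through Takeout at all.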
Thank you for posting this! It's an older comment (2 years), but I just tried it and it still seems to work!
Google Chrome users - use the CurlWget extension to get a ready-made command with the required headers and cookies.
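The command it produces is roughly of this shape (every value below is a placeholder, not a real header, cookie, or URL):
wget --header="Cookie: SID=...; HSID=...; SSID=..." --header="User-Agent: Mozilla/5.0 ..." -O takeout-001.tgz "https://takeout.google.com/..."
The important part is that the cookies from your signed-in browser session ride along with the request, which is presumably what the bare curl/wget attempts in the question were missing.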
I had some difficulty. This is what I did:
- Install the "webtop" docker container ("ubuntu-mate" version) and add a volume mapping to wherever you want the downloads to go
- Open the webtop GUI using a browser
- Open the Firefox browser inside webtop and change the download folder to the appropriate folder
- Download the files directly within the docker web browser
Initially I used a different version of webtop, but I found the web browser kept crashing.
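For reference, a minimal docker run sketch for this approach (image name, tag, and port are assumed from the linuxserver.io webtop project; the host path and in-container download folder are placeholders):
docker run -d \
  --name=webtop \
  -e PUID=1000 -e PGID=1000 \
  -p 3000:3000 \
  -v /path/to/downloads:/config/Downloads \
  lscr.io/linuxserver/webtop:ubuntu-mate
You would then point a browser at port 3000 on the host to reach the desktop.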
I've been using this for a while, and it seems like today, at least, it's broken. I'm going to try it again a bit later to see if it's working; Google Workspace was having issues earlier, so perhaps it's related, but it's certainly borked at the moment.
Sadly doesn't work for me anymore :(
I tried these URLs/commands:
In all cases, I only get an HTML file, instead of a 50 GB tgz file.
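One quick way to confirm what actually came down is to inspect the file (assuming the standard file and head utilities are available; the filename is whatever you saved to):
file takeout-001.tgz
head -c 300 takeout-001.tgz
If file reports HTML rather than gzip data, the response was likely a sign-in or consent page rather than the archive, which points back at missing session cookies.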