Attempt towards tracker- and profiling-proof RSS
Albert S. 3a723b9440 Added fetchers concept: separate scripts to fetch the feeds
Fetchers claim to be a certain client: they try to send the same
headers as the original client. That is better than a simple curl
request with a fake user agent, because curl does not send the other
headers the original client would, and its traffic therefore stands out.
2017-08-11 12:57:30 +02:00
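A request that mimics a browser's full header set might look like the
following sketch (the header values, URL, and output file are
illustrative, not what the fetchers actually send):

```
# Send the headers a real Firefox would send, not just its user agent
# (values here are illustrative examples)
curl -s \
  -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0' \
  -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
  -H 'Accept-Language: en-US,en;q=0.5' \
  -H 'Accept-Encoding: gzip, deflate' \
  'https://example.com/feed.xml' -o feed.xml
```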

README

What is randrss?
================
Normal RSS readers usually fetch all feeds at more or less the same
time, at constant intervals. This leaves a signature for anyone
observing your internet traffic: by looking for this pattern, you can
be identified more easily.

randrss fetches all your feeds at random intervals and in a random
order over a certain period of time. The feeds are downloaded to disk,
so all you need is a local webserver to serve them. An added benefit of
this approach is that you don't have to worry about how your client
deals with cookies etc.
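The core idea can be sketched in shell as follows (the delay window and
variable names are made up; the real script works differently in
detail):

```shell
# Pick a random delay within a window, sleep, then fetch one feed.
min=60; max=600                    # hypothetical delay window, in seconds
range=$((max - min + 1))
delay=$((min + RANDOM % range))    # $RANDOM is a bash feature; 0 if unset
echo "waiting ${delay}s before the next fetch"
# sleep "$delay"
# curl -s "$url" -o "$outfile"     # illustrative fetch of one feed
```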

Also, with only one client fetching the feeds and your readers pointed
at randrss's downloaded copies, you avoid certain trackers that may
identify you across devices (Google's feed proxy, Cloudflare), because
the combination of feeds you read is very likely unique. As your feeds
are now served from a single server, you can isolate your RSS reader in
its own network container so it can only contact that server. This is
probably what you should do to be sure your client does not contact the
feed servers in any way. In Thunderbird, set browser.chrome.favicons
to false.
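On Linux, one way to build such a container is a network namespace. A
rough sketch (the namespace name, interface names, and addresses are
made up, and the commands need root):

```
# Create a namespace that can only reach the local feed server.
ip netns add rssreader
ip link add veth0 type veth peer name veth1
ip link set veth1 netns rssreader
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec rssreader ip addr add 10.0.0.2/24 dev veth1
ip netns exec rssreader ip link set veth1 up
# No default route inside the namespace, so only 10.0.0.1
# (where the webserver listens) is reachable.
ip netns exec rssreader thunderbird
```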



randrss fetches the feeds using Tor.
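Assuming a local Tor client with its SOCKS proxy on the default port
9050, a single fetch could look like this (the URL is illustrative):

```
# --socks5-hostname routes the request through Tor's SOCKS proxy and
# also resolves DNS through Tor, which avoids DNS leaks.
curl --socks5-hostname 127.0.0.1:9050 -s 'https://example.com/feed.xml' -o feed.xml
```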

The only drawback of this approach is that new feed items arrive with
some delay, but that should be acceptable.

Usage
=====
First, tweak the shellscript a bit, if you like.
The input file has one entry per line, in the following format:
url:output file:(optional parameter for the sleep time, in the format x-y)
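A hypothetical input file might look like this (URLs, output paths, and
sleep windows are made up; the second line omits the optional sleep
parameter):

```
https://example.com/news.xml:/var/www/feeds/news.xml:30-90
https://example.org/blog.xml:/var/www/feeds/blog.xml
```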

An optional user agent file contains the user agents we will randomly
use per feed. One user agent per line.
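Picking one user agent at random from such a file can be sketched like
this (the file path and its contents are made-up examples):

```shell
# Write a hypothetical user agent file, one UA string per line.
cat > /tmp/uas.txt <<'EOF'
Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:52.0) Gecko/20100101 Firefox/52.0
EOF
count=$(wc -l < /tmp/uas.txt)       # number of available user agents
pick=$((RANDOM % count + 1))        # random 1-based line number
ua=$(sed -n "${pick}p" /tmp/uas.txt)
echo "$ua"
```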

randrss [input file] [user agent file]
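For example, with an input file named feedslist and user agents in
useragents.txt (both file names are made up):

```
./randrss feedslist useragents.txt
```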