This is a first draft of my thoughts on the Block Bot roadmap, based on suggestions from lots of people on Atheismplus.com/forums, Twitter, the comments here, and Tim Farley’s post. Once I’ve had feedback and revised it, I’ll post it on the www.theblockbot.com website.
I think Tim was incorrect in seeing the current Block Bot as *the* Block Bot on Twitter, but in the future release section I have some suggestions for making it an implementation of a general purpose block list tool. I’d like some feedback on the likelihood of anyone using this enhanced functionality, as I am personally doubtful it has much use beyond the current community list. Obviously implementing the larger changes will require some amount of effort; as any software engineer knows, creating something like this green field is relatively easy, but now that we have people signed up to the current bot it becomes a lot harder to change.
I will also be updating the website to make the FAQ, rules and opening page clearer, which addresses one of Tim’s criticisms. The full list of blockers will also be added there, assuming they are happy with this. So, on to the code changes -
Minor Release (TBB V2.01)
This will be committed to github in the next couple of weeks and includes -
–> Logging who does what for audit to logs on server (Already does this but not on github)
–> Removal of block and report for spam (Already removed, but not on github)
–> When #spam or #abuse is added, the tweet will say the account has been blocked AND suggest that people report it to Twitter, with a link
So the audit stuff is a little moot given the logs only live on my server, so any queries would require trust in me anyway! A public audit would be a better idea, and that is in the major release section below. Report for spam has become contentious, mainly, I feel, because those “discussing” it purposely ignore that it only applied to a small subset of #Level1 blocks, when #spam was added. The soundbite “The Block Bot reports users for spam to get them suspended” is far more powerful than the truth, which is that only fake accounts and very abusive accounts have been reported in this way. However, given that reporting for spam doesn’t get the really nasty accounts suspended anyway, I’m going to remove the functionality. I know this because I’ve reported a few test accounts for spam from all users, both new accounts and ones that had made a few tweets, and none got suspended (we have many hundreds of users now, so this seems fairly conclusive). Finally, as we open up reporting to more blockers there is a possibility of error; Tim pointed to one such error as me “overruling” a blocker, when in fact the person blocked was accidentally added to #Level1 rather than #Level3. So to avoid the inevitable fallout of an account accidentally getting reported for spam and then suspended (likely coincidentally, as discussed), I think it’s better to just remove the function and encourage our users to report for abuse, which is much more likely to get abusive accounts removed. (If you want to argue abusive accounts shouldn’t be removed and that we are freezing their peach, take it up with Twitter; we’ll only encourage reporting, and Twitter make the decision based on their ToS.)
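To make the V2.01 behaviour concrete, here is a minimal sketch of how the bot’s reply tweet might be composed once the report-for-spam call is gone. The function name, the wording of the status text and the `REPORT_URL` are all my illustrative assumptions, not the bot’s actual code:

```python
# Sketch only: when a block command carries #spam or #abuse, the bot no
# longer reports for spam itself; it just blocks and nudges followers to
# report the account to Twitter themselves.
ABUSE_TAGS = {"#spam", "#abuse"}
REPORT_URL = "https://support.twitter.com/forms/abusiveuser"  # assumed link

def compose_block_tweet(target: str, hashtags: set[str]) -> str:
    """Build the status text the bot tweets after blocking `target`."""
    text = f"{target} has been blocked for subscribers."
    if hashtags & ABUSE_TAGS:
        text += f" If they abused you, please report them to Twitter: {REPORT_URL}"
    return text
```

The point of the sketch is that the decision to suspend stays entirely with Twitter; the bot only ever blocks and suggests.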
Major Release (TBB V2.1) … No timeline for implementation yet
–> All instances of a block report will be logged on a publicly accessible HTML page per blocked person
So when I tweet “@the_block_bot #block #level1 #spam +ool0n because he is an annoying git https://twitter.com/ool0n/status/368824861033910273” … this tweet will be saved as JSON in a report directory under my username: …/web_root/blocks/ool0n/reports/blockers/&lt;id&gt;.json … You may note the “blockers” directory there; this is for a subsequent major release that will allow non-blockers to add reports of abuse/annoyance/bad jokes/whatever to the bot and have them recorded on that user’s public page as a report from an unauthorised person. Obviously it is unlikely that people in the list will be allowed to do this at first; the proposed future release will allow them (if it’s implemented — I need some feel for the interest before I waste the effort). These JSON files will then be rendered on a report page for reference: …/web_root/blocks/ool0n/report.html
–> Multiple block reports are encouraged and will be logged for that block
We want multiple reports of abuse, annoyance, etc. to act as the evidence for why that person is in the block list. This could include blog posts on why a particular person is, in our opinion, a menace on Twitter, and it can include picture links and tweet links for future reference. These will be rendered as a tweet on the report.html page but stored locally, so they can never be lost or deleted when accounts die. This is important for the next item -
–> Blocks will be removed after a level dependent time period and the bot will tweet this has happened.
So, for example, if there is no new report for a #Level3 person they will be removed after 3 months; obviously when the bot tweets that this has happened it may prompt a blocker to check them out and re-add them immediately. The time periods are not set, but tentatively: 3 months for #Level3, 6 months for #Level2 and 12 months for #Level1. Although I’m not sure about any timeline for #Level1 blocks; it may be that infinite is appropriate.
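The expiry rule above is simple enough to sketch. The window lengths are the tentative ones from this post, and the function names are mine; treating #Level1 as finite here is just for illustration, since the post leaves that open:

```python
import datetime as dt

# Tentative expiry windows from the roadmap, in days.
# #Level1 may well end up infinite instead.
EXPIRY_DAYS = {"#Level3": 90, "#Level2": 180, "#Level1": 365}

def is_expired(level: str, last_report: dt.datetime, now: dt.datetime) -> bool:
    """True when the newest report for a block is older than its level's window."""
    days = EXPIRY_DAYS.get(level)
    if days is None:
        return False  # unknown or infinite level: keep the block
    return (now - last_report).days > days
```

On each expiry the bot would tweet the removal, giving blockers the chance to review and re-add.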
–> Predefined hashes such as #terf #swerf #mra #ftbullies will be available and the blockers encouraged to use them
This functionality will just record in the reports why the person was blocked and what category of “asshole, anti-feminist or annoyance” the blocked person fits into. Initially this will live only in the JSON reports and not be stored anywhere else. The intention for the next major release is to give people more choice over what they block. So #Level1, #Level2 and #Level3 will still be available to block everyone added to those levels, but if you only want to block #mra tweeps on those levels, there will be a category selection area where users can customise their selections. This should accommodate people who are strongly anti-MRA but don’t want to accidentally block people who are mainly just annoying anti-A+/FTB tweeps, otherwise known as the #ftbullies (stands for Fuck The Bullies, a reference to those obsessing over A+ and FTB people). There is actually a large crossover between the MRA and anti-FTB/A+ categories, but this means only those also identified as MRAs would be blocked. Obviously this is subjective, and the appeal process will need updating, as labelling people #MRA when they assert they are not will generate more complaints.
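One plausible way the per-user filter could work, assuming the simplest semantics: a user subscribes to levels as now, and can opt out of category tags; a block is skipped only when every tag on it is one the user excluded. The function and parameter names are hypothetical:

```python
def should_block(block_levels: set[str], block_tags: set[str],
                 user_levels: set[str], user_excluded_tags: set[str]) -> bool:
    """Hypothetical per-user filter: subscribe to levels, opt out of category tags."""
    # The user must be subscribed to at least one level this block is on.
    if not (block_levels & user_levels):
        return False
    # Skip the block only if every recorded reason is one the user opted out of.
    if block_tags and block_tags <= user_excluded_tags:
        return False
    return True
```

Under these semantics a tweep tagged both #mra and #ftbullies would still be blocked for a user who excluded only #ftbullies, which seems like the safer default given the crossover mentioned above.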
It’s likely these changes will be implemented in the live bot incrementally then released as a whole on github.
Major Release (TBB V3.0) … Future Bot!
–> On the sign up page there will be the ability to choose blocking criteria based on the predefined hashtags.
As mentioned above, the blockers will start to add #terf, #mra, etc. when adding block reports, and these will be used to define more granular criteria for blocking. So a user can choose #Level1 or #Level2 as usual, but also deselect certain hashtags, so that if a person is blocked for one of those reasons they are not blocked for that particular user.
–> Anyone can add to the block list.
There will be groups like the “Atheism Plus Blockers” or #TeamAtheismPlus, whose members have access to add to the block list. This is currently the case … but with this function anyone could create a group (it might need admin assistance), and people could then be added to that group as blockers by its own admins. They can add blocks to their own block list, and users on the sign-up page will be presented with a list of groups to subscribe to, with the ability to sign up to multiple block lists.
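To give a feel for the multi-list idea, here is a toy data layout: each group owns its admins, blockers and block list, and a user’s effective block set is the union across their subscriptions. All handles and the structure itself are invented for illustration:

```python
# Hypothetical multi-group layout for V3.0.  Real storage would be files
# or a database; this just shows the relationships.
groups = {
    "TeamAtheismPlus": {
        "admins": {"@ool0n"},
        "blockers": {"@ool0n", "@some_blocker"},   # illustrative handles
        "blocks": {"@troll_one", "@troll_two"},
    },
}

def blocks_for(subscriptions: set[str]) -> set[str]:
    """Union of block lists across the groups a user subscribed to."""
    out: set[str] = set()
    for name in subscriptions:
        out |= groups.get(name, {}).get("blocks", set())
    return out
```

The sign-up page would then simply list `groups.keys()` as subscribable block lists.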
Of these last two changes, the categories one would be relatively easy and will most likely be implemented. The second is more aimed at Tim Farley’s view of the Block Bot as THE Block Bot … something I hope to avoid by people setting up their own services, although I acknowledge this isn’t as easy as I make it seem. There are costs in time and money in setting up a server to host a block list.
Misc Technical Changes
–> Move the AWS EC2 instance to the California data center.
When I set up my EC2 instance it was just for my blog and playing around with, so I chose the cheapest Amazon data center to host it. However, now that it is calling the Twitter servers in California rather a lot, it makes sense to site the Block Bot nearer their service. I expect this to make blocking faster, so we should be able to service more users as a result. Currently it can handle ~50 people signing up at once and ~2,000 users in total; worst case, blocking slows a little if the bot misses its 15-minute window for applying all the new blocks for each user.
–> Parallelise the blocking code
Tim Farley mentioned this one, and it really wouldn’t be hard: the current blocking code reads in each user’s details from files stored on the filesystem and applies new blocks for each user (max 15 in 15 minutes). To parallelise it would be a matter of starting several of these processes, staggered over the current 15-minute window, with each process reading a subset of users. For example, three “threads” could each read in a third of the users and run at 15-minute intervals, offset by 5 minutes. This would mean the bot could handle 6,000 users in total and ~150 users signing up at once at peak. It’s doubtful it will ever get this big, but it’s useful for it to be able to scale. There’s no reason it couldn’t be set to have 15 “threads” offset by 1 minute and support 30,000 users, but at that point the EC2 instance would need more CPU power to run the bot! Also, Twitter would likely start to get a little annoyed with us and start demanding money for a premium service…
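The staggering arithmetic above can be sketched directly. Nothing here is the bot’s real code; the per-worker capacity of 2,000 users is the figure quoted for the current single process, and the sharding-by-index scheme is just one obvious way to split the user files:

```python
# Sketch of the staggered-worker maths for the parallelised blocker.
WINDOW_MIN = 15          # each worker cycles once per 15-minute window
MAX_BLOCKS_PER_WINDOW = 15  # Twitter-friendly per-user cap, per window

def shard_for(user_index: int, n_workers: int) -> int:
    """Assign each user file to one worker by simple modulo sharding."""
    return user_index % n_workers

def start_offset_minutes(shard: int, n_workers: int) -> int:
    """Workers start spaced evenly across the window (e.g. 0, 5, 10 for 3)."""
    return shard * (WINDOW_MIN // n_workers)

def total_capacity(n_workers: int, users_per_worker: int = 2000) -> int:
    """Rough user capacity, extrapolating the current single-process limit."""
    return n_workers * users_per_worker
```

So three workers offset by 5 minutes give the 6,000-user figure, and fifteen workers offset by 1 minute give 30,000, CPU and Twitter’s patience permitting.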
Your thoughts are welcome in the comments below… your *constructive* thoughts, as I will delete all crap!
(Sorry for the moderation but I don’t want to get this thread shitted up by whingers)
ETA: BLOCKERS — DM me if you are not happy to have your Twitter ID on www.theblockbot.com identifying you as an authorised blocker. [eta] I think version 2.1 cannot work with anonymous blockers, so I’ll have to remove anyone not happy with this when we go ahead and implement it.
EDIT: An addition to the roadmap is to implement this at some point - https://twitter.com/check_blocks
“Having a conversation on Twitter and wondering if they are arguing in good faith? cc @check_blocks and it will tweet you back with any report pages for the people in the tweet. ”
–> Pretty easy to implement, just need to scan the people mentioned, or +<names> and tweet back a link to their report page, if there is one.
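A minimal sketch of that scan-and-reply step. The report index, its URL format and the reply wording are my assumptions; in practice the bot would check whether a report.html exists for each name rather than consult a hard-coded dict:

```python
import re

# Hypothetical index of handle -> report page, standing in for a check
# against .../web_root/blocks/<name>/report.html on the server.
REPORTS = {"ool0n": "http://www.theblockbot.com/blocks/ool0n/report.html"}

def check_blocks_reply(tweet_text: str):
    """Scan @mentions and +names, return a reply listing known report pages."""
    names = re.findall(r"[@+](\w+)", tweet_text)
    links = [REPORTS[n] for n in names if n in REPORTS]
    if not links:
        return None  # nothing to say: no report pages for anyone mentioned
    return "Report pages: " + " ".join(links)
```

So cc’ing @check_blocks into a conversation would get back at most one tweet, linking the report pages of anyone in it who is on the list.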