Aug 18, 2013
 

This is a first draft of my thoughts on the Block Bot roadmap, based on suggestions from lots of people on Atheismplus.com/forums, Twitter and the comments, and on Tim Farley’s post here. Once I have had feedback and revised it, I’ll post it on the www.theblockbot.com website.

I think Tim was incorrect in seeing the current Block Bot as THE Block Bot on Twitter, but in the future release section I have some suggestions for making it an implementation of a general-purpose block list tool. I would like some feedback on how likely anyone is to use this enhanced functionality, as I am personally doubtful it has much use beyond the current community list. Obviously implementing the larger changes will require some amount of effort; as any software engineer knows, creating something like this green-field is relatively easy, but now that we have people signed up to the current bot it becomes a lot harder to change.

I will also be updating the website to make the FAQ, rules and opening page clearer, which addresses one of Tim’s criticisms; the full list of blockers will also be added there, assuming they are happy with this. So, on to the code changes -

 

Minor Release (TBB V2.01) 

This will be committed to GitHub in the next couple of weeks and includes -

–> Logging of who does what, for audit, to logs on the server (already done, but not yet on GitHub)
–> Removal of block-and-report-for-spam (already removed, but not yet on GitHub)
–> When #spam or #abuse is added, the tweet will say the account has been blocked AND suggest people report it to Twitter, with a link

So the audit stuff is a little moot given it is only on my server, so any queries would require trust in me anyway! A public audit would be a better idea and that is in the major release section below. Report for spam has become contentious, mainly I feel because those “discussing” it purposely ignore that it only applies to a small subset of #Level1 blocks, when #spam is added. Having a soundbite that “The Block Bot reports users for spam to get them suspended” … is far more powerful than the truth that only fake accounts and very abusive accounts have been reported in this way.

However, given for one that reporting for spam doesn’t get the really nasty accounts suspended, I’m going to remove the functionality. The reason I know this is that I’ve had a few test accounts reported for spam by all users – new accounts and ones that have made a few tweets – and none got suspended (we have many hundreds of users now, so this seems fairly conclusive). Finally, as we open up the reporting to more blockers there is a possibility of error; Tim pointed to one such error as me “overruling” a blocker. In fact the person blocked was accidentally added to #Level1 rather than #Level3 … So to avoid the inevitable fallout of an account accidentally getting reported for spam and then suspended (likely coincidentally, as discussed), I think it’s better to just remove the function and encourage our users to report for abuse, which will be much more likely to get abusive accounts removed. (If you want to argue abusive accounts shouldn’t be removed and we are freezing their peach, take it up with Twitter, as we’ll only encourage reporting; Twitter make the decision based on their ToS.)

 

Major Release (TBB V2.1) … No timeline for implementation yet

–> All block reports will be logged on a publicly accessible HTML page for each blocked person

So when I tweet “@the_block_bot #block #level1 #spam +ool0n because he is an annoying git https://twitter.com/ool0n/status/368824861033910273” … this tweet will be saved as JSON in a report directory under my username: …/web_root/blocks/ool0n/reports/blockers/<id>.json … You may note the “blockers” directory there; this is for a subsequent major release that will allow non-blockers to add reports of abuse/annoyance/bad jokes/whatever to the bot and have them recorded on that user’s public page as a report from an unauthorised person. Obviously it is unlikely that people in the list will be allowed to do this at first; the proposed future release will allow them (if it’s implemented, I need some feel for the interest before I waste the effort). These JSON files will then be rendered on a report page for reference: …/web_root/blocks/ool0n/report.html
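As a rough illustration of what one of those JSON reports could contain, here is a minimal Python sketch; the field names, the WEB_ROOT path and the save_block_report helper are all assumptions for the example, not the final format.

```python
import json
import os
from datetime import datetime, timezone

WEB_ROOT = "/var/www/theblockbot"  # assumed location of web_root, illustrative only

def save_block_report(blocked, blocker, tweet_id, level, tags, reason, evidence_url):
    """Write one block report as JSON under
    .../blocks/<blocked>/reports/blockers/<tweet_id>.json (hypothetical layout)."""
    report = {
        "blocked": blocked,              # e.g. "ool0n"
        "blocker": blocker,              # who sent the #block tweet
        "tweet_id": tweet_id,            # id of the report tweet itself
        "level": level,                  # "Level1" / "Level2" / "Level3"
        "tags": sorted(tags),            # e.g. ["spam"] or ["mra"]
        "reason": reason,                # free text from the report tweet
        "evidence_url": evidence_url,    # link quoted in the report
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }
    report_dir = os.path.join(WEB_ROOT, "blocks", blocked, "reports", "blockers")
    os.makedirs(report_dir, exist_ok=True)
    with open(os.path.join(report_dir, f"{tweet_id}.json"), "w") as f:
        json.dump(report, f, indent=2)
```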

–> Multiple block reports are encouraged and will be logged for that block

We want multiple reports of abuse / annoyance etc. to act as the evidence for why that person is in the block list. So this could include blog posts on why a particular person is, in our opinion, a menace on Twitter. It can include picture links and tweet links for future reference. These will be rendered as tweets on the report.html page but stored locally, so they can never be lost or deleted when accounts die etc. This is important for the next item -
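A minimal sketch of the “stored locally” part, assuming a Tweepy-style client object (api) is available; the regular expression, directory layout and helper name are illustrative, not existing Block Bot code.

```python
import json
import re

TWEET_URL_RE = re.compile(r"twitter\.com/\w+/status/(\d+)")

def archive_linked_tweets(report_text, archive_dir, api):
    """Fetch any tweets linked in a report and keep a local JSON snapshot of each,
    so the evidence survives even if the account or tweet is later deleted."""
    for tweet_id in TWEET_URL_RE.findall(report_text):
        status = api.get_status(tweet_id)          # Tweepy-style lookup of the linked tweet
        snapshot = {
            "id": tweet_id,
            "author": status.user.screen_name,
            "text": status.text,
            "created_at": str(status.created_at),
        }
        with open(f"{archive_dir}/{tweet_id}.json", "w") as f:
            json.dump(snapshot, f, indent=2)
```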

–> Blocks will be removed after a level-dependent time period and the bot will tweet that this has happened.

So, for example, if there is no new report for a #Level3 person they will be removed after 3 months; obviously when the bot tweets that this has happened it may prompt a blocker to check them out and re-add them immediately. The time periods are not set, but tentatively: 3 months for #Level3, 6 months for #Level2 and 12 months for #Level1… although I’m not sure about any time limit for #Level1 blocks; it may be that permanent is appropriate.
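As a small sketch of how that expiry rule could look in code, using the tentative periods above; the EXPIRY table and block_expired helper are purely illustrative.

```python
from datetime import datetime, timedelta, timezone

# Tentative expiry periods from the roadmap; #Level1 may well end up permanent.
EXPIRY = {
    "Level3": timedelta(days=90),    # ~3 months
    "Level2": timedelta(days=180),   # ~6 months
    "Level1": timedelta(days=365),   # ~12 months, possibly never
}

def block_expired(level, last_report_at, now=None):
    """True if the most recent report for this block is older than the level's window."""
    now = now or datetime.now(timezone.utc)
    return now - last_report_at > EXPIRY[level]
```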

–> Predefined hashes such as #terf #swerf #mra #ftbullies will be available and the blockers encouraged to use them

This functionality will just be for reference in the reports: why the person was blocked and which category of “asshole, anti-feminist or annoyance” the blocked person fits into. Initially this will just be in the JSON reports and not stored anywhere else. The intention for the next major release is to allow people more choice over what they block. So #Level1, #Level2 and #Level3 will still be available to block all people added to those levels, but if you only want to block #mra tweeps added to those levels then there will be a category selection area where the user can customise their selections. This should suit people who are strongly anti-MRA but don’t want to accidentally block some people who happen to be mainly just annoying anti-A+/FTB tweeps, otherwise known as the #ftbullies (stands for Fuck The Bullies, a reference to those obsessing over A+ and FTB people). Actually there is a large crossover between the MRA and anti-FTB/A+ categories, but this would mean only those that are also identified as MRAs get blocked. Obviously this is subjective and there will need to be an update to the appeal process, as labelling people #MRA when they assert they are not will generate more complaints.
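As a hedged sketch of how the predefined hashes might be pulled out of a report tweet, here is an illustrative parser; KNOWN_TAGS, parse_block_tweet and the final tag list are all assumptions, not decided.

```python
import re

KNOWN_TAGS = {"terf", "swerf", "mra", "ftbullies", "spam", "abuse"}   # illustrative list
LEVEL_RE = re.compile(r"#level([123])", re.IGNORECASE)

def parse_block_tweet(text):
    """Pull the level and any recognised category tags out of a report tweet,
    e.g. '@the_block_bot #block #level2 #mra +some_user because ...'."""
    hashes = {h.lower() for h in re.findall(r"#(\w+)", text)}
    match = LEVEL_RE.search(text)
    level = f"Level{match.group(1)}" if match else None
    return level, hashes & KNOWN_TAGS
```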

It’s likely these changes will be implemented in the live bot incrementally and then released as a whole on GitHub.

 

Major Release (TBB V3.0) … Future Bot!

–> On the sign-up page there will be the ability to choose blocking criteria based on the predefined hashtags.

As mentioned above, the blockers will start to add #terf, #mra etc. when adding block reports, and these will be used to define more granular criteria for blocking. So a user can choose #Level1 or #Level2 as usual, but also deselect certain hashtags, so that if a person is blocked for one of those reasons they are not blocked for that particular user.
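A minimal sketch of how that deselection could be applied when deciding whether a given block propagates to a given subscriber; the data shapes and the exact opt-out semantics are assumptions rather than the final design.

```python
def block_applies(block, subscription):
    """Decide whether one block should be applied for one subscriber.

    block        -> {"level": "Level2", "tags": {"mra", "ftbullies"}}   (illustrative shape)
    subscription -> {"levels": {"Level1", "Level2"}, "deselected_tags": {"ftbullies"}}

    A block is skipped only when *every* reason it was added for has been
    deselected, so someone tagged both #mra and #ftbullies is still blocked
    for a subscriber who only deselected #ftbullies.
    """
    if block["level"] not in subscription["levels"]:
        return False
    if not block["tags"]:                       # untagged blocks always apply
        return True
    return bool(block["tags"] - subscription["deselected_tags"])
```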

–> Anyone can add to the block list.

There will be groups like the “Atheism Plus Blockers” or #TeamAtheismPlus, in which certain tweeps have access to add to the block list. This is currently the case … but with this function anyone could create a group (it might need admin assistance), and people could then be added to that group as blockers by its own admins. They would add their own blocks into their own block list, and users on the sign-up page would be presented with a list of groups to subscribe to, with the ability to sign up to multiple block lists.
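A rough sketch of the data model this implies, with illustrative names only; nothing here is committed code.

```python
from dataclasses import dataclass, field

@dataclass
class BlockGroup:
    """One community-run block list, e.g. "Atheism Plus Blockers" (illustrative)."""
    name: str
    admins: set = field(default_factory=set)     # user ids who can add/remove blockers
    blockers: set = field(default_factory=set)   # user ids who can add block reports
    blocks: dict = field(default_factory=dict)   # blocked user id -> list of reports

@dataclass
class Subscription:
    """A signed-up user and the groups (block lists) they subscribe to."""
    user_id: str
    groups: set = field(default_factory=set)     # names of subscribed BlockGroups
```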

 

Of these last two changes, the categories one would be relatively easy and will most likely be implemented. The second is more aimed at Tim Farley’s view of the Block Bot being THE Block Bot … something I hope to avoid by people setting up their own services, although I acknowledge this isn’t as easy as I make it sound. There are costs in time and money in setting up a server to host a block list.

 

Misc Technical Changes

–> Move the AWS EC2 instance to the California data center. 

When I set up my EC2 instance it was just for my blog and playing around, so I chose the cheapest Amazon data center to host it. However, now that it is calling the Twitter servers in California rather a lot, it makes sense to site the Block Bot nearer their service. I expect this to make blocking faster, so we should be able to service more users as a result. Currently it can handle ~50 people signing up at once and ~2,000 users in total; worst case, the code slows a little if it misses its 15-minute time window for applying all the new blocks for each user.

–> Parallelise the blocking code 

Tim Farley mentioned this one and it really wouldn’t be hard, as the current blocking code reads in each user’s details from files stored on the filesystem and applies new blocks for each user (max 15 per 15 minutes) … To parallelise it would be a matter of starting multiple of these processes staggered over the current 15-minute window, each reading a subset of the users. So, for example, have three “threads”, each reading in a third of the users and each running at 15-minute intervals, offset by 5 minutes. This would mean the bot could handle 6,000 users in total and ~150 users signing up at once at peak. It’s doubtful it will ever get this big, but it’s useful for it to be able to scale. There’s no reason it couldn’t be set to have 15 “threads” offset by 1 minute and support 30,000 users, but at that point the EC2 instance would need more CPU power to run the bot! Also, Twitter would likely start to get a little annoyed with us and start demanding money for a premium service….
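A minimal sketch of the sharding half of this, assuming the existing per-user blocking logic is wrapped in a function (process_new_blocks below is a placeholder for it) and that each user’s details live in one file; the paths and shard count are illustrative, and in practice the stagger would just be three cron entries offset by 5 minutes.

```python
import glob

def process_new_blocks(user_file):
    """Placeholder for the existing per-user blocking code, which applies at most
    15 new blocks per 15-minute window for the user stored in user_file."""
    ...

def shard_users(user_files, n_shards):
    """Deterministically split the stored user files into n_shards groups."""
    shards = [[] for _ in range(n_shards)]
    for i, path in enumerate(sorted(user_files)):
        shards[i % n_shards].append(path)
    return shards

def run_shard(shard):
    for user_file in shard:
        process_new_blocks(user_file)

# Illustrative wiring: three shards, each run by its own process, started at
# :00, :05 and :10 within the 15-minute window (e.g. via cron).
user_files = glob.glob("/var/blockbot/users/*.json")   # assumed storage layout
shards = shard_users(user_files, 3)
```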

Your thoughts are welcome in the comments below… your constructive thoughts, that is, as I will delete all crap!

(Sorry for the moderation but I don’t want to get this thread shitted up by whingers)

ETA: BLOCKERS — DM me if you are not happy to have your Twitter ID on www.theblockbot.com outing you as an authorised blocker. [ETA] I think version 2.1 cannot work with anonymous blockers, so I’ll have to remove anyone not happy with this when we go ahead and implement it.

——————————————————————————————————————————————————————–

EDIT: An addition to the Block Bot roadmap is to implement this at some point - https://twitter.com/check_blocks

“Having a conversation on Twitter and wondering if they are arguing in good faith? cc @check_blocks and it will tweet you back with any report pages for the people in the tweet. ”

–> Pretty easy to implement: just scan the people mentioned (or the +<names>) and tweet back a link to their report page, if there is one.
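A hedged sketch of that responder, assuming a Tweepy-style client and the per-person report pages from the V2.1 layout above; the base URL, paths and helper name are illustrative.

```python
import os

WEB_ROOT = "/var/www/theblockbot"                  # assumed web root
BASE_URL = "http://www.theblockbot.com/blocks"     # assumed public URL for report pages

def handle_mention(status, api):
    """Reply to a tweet that cc'd @check_blocks with links to any existing report pages
    for the other accounts mentioned in it."""
    names = {m["screen_name"] for m in status.entities.get("user_mentions", [])}
    names.discard("check_blocks")
    links = [f"{BASE_URL}/{name}/report.html" for name in sorted(names)
             if os.path.exists(os.path.join(WEB_ROOT, "blocks", name, "report.html"))]
    if links:
        api.update_status(
            status=f"@{status.user.screen_name} " + " ".join(links),
            in_reply_to_status_id=status.id,
        )
```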


  8 Responses to “Outline Block Bot Roadmap …”

  1. This is a very ambitious plan, and I’m glad you published it. It’s a good thing.

    And thanks again for always being very civil with me, I know you disagree with some of my criticisms, but I always felt you were being fair to me in your comments.

    • Cheers Tim. Don’t be fooled, I’m often very uncivil and sarcastic, but I usually manage to reserve that for people not arguing in good faith or being rude. Pretty obvious you don’t fall into either category…

  2. That all seems sensible, oolon, though a lot of volunteer time on your part.

    As for a time limit on Level 1s being released, I don’t think a time limit is the answer. In order for the kind of person who ends up on L1 to be reconsidered, if you ask me, they’d have to, at a minimum, contact you directly with a credible, sincere apology. Better yet, there should be evidence in their TLs or wherever that they’re walking the walk.

    • True, also for some of the L2 people from the Slymepit; I cannot see Tony Parsehole/@SpongyPissFlaps getting a clue anytime soon. Maybe we need all #Level1 ones to be permanent and each report at #Level2/3 to count for x months cumulatively… That solves the problem of a one-off add in anger, where they never do anything to be added again, ending up on there forever with Tony and mates.

  3. I agree, all seem like sensible changes.

    As for the bot being used as THE bot, with the ability for other groups to set up their own shared lists with an admin and blockers of their own- that’s a very cool idea, but yeah, I’m not sure who would use it. I foresee anti-botters setting up their own communities to block current bot users & blockers. That may not NECESSARILY be a bad thing, but has potential dramas, perhaps not all of which I can even imagine at this point. But, again, I like that idea in general, especially if you want this to grow into THE block bot, not just the A+ block bot.

    Agree with all the minor release items. Increased accountability is always good, removing features which don’t work anyhow and only lead to sensational headlines is always good, I like the tags and I like the link to report for those who want to.

    Like the idea of having, essentially, a list of tweets that reported each blocked person, for all the obvious reasons. It might even be cool to do the same thing for blockers- see all the users they have reported and why. Might be good for looking at trends or in case there is a complaint about a particular blocker being too frivolous or whatnot. On that note- similar to the ability to opt out of blocking ppl on whatever level that are tagged with a certain hashtag, and also the time-limit thing- maybe the ability to not block anyone who hasn’t been reported in ____ months or something like that? But now I’m venturing into extra suggestions which wasn’t asked for, so I’ll stop there. “In a perfect world!” :)

    LOVE multiple block reports encouraged, especially if that will automatically save that tweet forever (if we include a link to the offending tweet, will it also save that tweet, in case the tweet gets deleted? Or will it only save the link, which will give an error once it’s deleted?). This way for offenders who have SOOOOO many horrible things to say and document, we don’t have to try to just pick the worst and/or squeeze it all in to one report tweet. And each different offending tweet can be tagged appropriately- perhaps some are MRA, some are anti-A+, etc.

    Time periods: sure for level 3. I’m pretty sure some of the people I’ve sent to level 3 were one-off jerks. Heck, I was before I knew better. It’s good to give those people a chance to have learned from their mistakes. Maybe three months later they won’t be a jerk anymore. I’ve been through this, so I get it. It might be rare, but it does happen. BUT- level 1, I agree with SpokesGay- there should really be more effort to remove them- they should WANT to be removed, ask for it personally, NOT be anonymous about it (not that we will publish their name, but so if they are vile in the future we know who they actually are), and show actual evidence of walking the walk to go with the talk. Submit it to you, and it’s kept so that it can be referred back to in case they change their mind and decide being a jerk is their true calling after all. Level 2 I’m on the fence about. I don’t think it should just be on a timer, but I also don’t think they necessarily need the big mea culpa of level 1s. Not sure what a good middle ground would be… Maybe if they want to be given another chance, they can be bumped to level one, and if no one reports them again, they’ll be off the list in 3 months? Or whatever works best.

    Love the predefined hashtags for now and future categories. With lots of input we can keep it not too long but also comprehensive enough to cover all the bases. I look forward to this.

    Everything- that which I understand sounds good to me!

    Also- if it hasn’t been done already, I vote a donation link be set up, so those who can and want to can make a donation- better yet, have the ability to set up a recurring donation. Say, $5 a month or something. I for one would love to help/support this with more than just a hearty recommendation to friends, and being technically limited, you’ll just have to shut up and take my money. If there’s already a donation link then MY BAD for not knowing about it yet. :)

    • Thanks for the feedback, all helps in deciding the roadmap!

      I also thought that criteria like how long they’ve been in, the number of times they’ve been reported, the number of different people who added them, etc., could all be blocking criteria… But given we have no complaints from users about this, you have to wonder how many would really use these options!

      Will consider the donation link idea…

  4. All of this makes sense to me and seems like a good plan. I like the idea of adding hashtags and screen caps to show why a person is being added to the block list.

    I also agree with Spokesgay that perhaps there should be no time limit on a #Level1 block.

    I am also fine with my name appearing on the list of blockers.

    Thanks,

    Sophia
