Facebook recently launched Video Calling (one-to-one only) in partnership with Skype at http://www.facebook.com/videocalling. Yes, you guessed it right! That was the page URL of our fast-growing Video Calling application, which Facebook disabled, without any explanation of course, on April 7th. Clearly this had nothing to do with any policy violation: Facebook wanted the URL for itself, so it went ahead and disabled the application, demonstrating its one-upmanship attitude in dealing with such situations.
When we tried to understand the reasons for this action, we received a generic email which essentially read:
This app pre-fills user's message and this is not allowed according to our Policies (point IV.2): “You must not pre-fill any of the fields associated with the following products, unless the user manually generated the content earlier in the workflow: Stream stories (user_message parameter for Facebook.streamPublish and FB.Connect.streamPublish, and message parameter for stream.publish), Photos (caption), Videos (description), Notes (title and content), Links (comment), and Jabber/XMPP.”
We recommend you to fix this and re-launch again the app. Also, in order to avoid bad user's feedback, we recommend you to monitor user reports and be sure to comply with all Facebook Principles and Policies (http://developers.facebook.com/policy/).
We obviously did not “pre fill” any fields without user action. The explanation seemed more like “this is what we think, and you can do whatever you want”. In fact, we were so particular about the privacy of our users that we shared the following image to demonstrate exactly how we use user information. We seriously think Facebook must provide a way for apps to detail, in the Permissions Dialog, how they use information.
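To make the distinction in the quoted policy concrete, here is a minimal sketch (our own illustration, not code from the actual app; the helper name is hypothetical) of how a compliant app builds the legacy stream.publish parameters: the `message` field is passed only when the user actually typed the text, and omitted otherwise, rather than being pre-filled with app-supplied boilerplate.

```javascript
// Hypothetical helper: assemble parameters for a legacy stream.publish call.
// Per policy IV.2, `message` may only carry text the user manually generated
// earlier in the workflow -- never text written by the application.
function buildStreamPublishParams(attachment, userTypedMessage) {
  const params = { attachment: attachment };
  // Include `message` only when the user actually generated it.
  if (typeof userTypedMessage === "string" && userTypedMessage.trim().length > 0) {
    params.message = userTypedMessage;
  }
  return params;
}

// Compliant: the text came from the user's own input box.
const ok = buildStreamPublishParams({ name: "Video Call invite" }, "Join me for a call!");
// Also compliant: no user text, so `message` is simply left out.
// (The non-compliant pattern would be passing app-written text here.)
const empty = buildStreamPublishParams({ name: "Video Call invite" }, null);
console.log("message" in ok, "message" in empty); // → true false
```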
I haven’t seen any other Facebook application share details about the information it accesses and its use. If Facebook were really serious about cleaning up App Spam, the flood-gates wouldn’t be open to the bunch of perverted “social video chat” applications.
- In sending Video Calling Invitations, we were not doing anything different from the following applications: Video Chat Rounds, Tiny Chat, vChatter.
- When we exchanged emails with the Facebook representative, we highlighted the fact that we were being singled out while a bunch of other apps had far worse policy violations.
- Three months after we reported this, those apps are still functional. This demonstrates that Facebook indeed had a vested interest in disabling our application.
- Why would Facebook recommend that developers relaunch their application at a *different* URL if the application were known to be spammy?
Update 14-07-2011: Facebook has responded to our complaint to GigaOM. ZDNet and AllFacebook are reporting that FB is denying what I wrote on this blog – not surprising. Simultaneously, Techcrunch reports on a new version of TinyChat with an introduction video – amusing that this comes at such a time. If you look at the video (relevant snapshot below), it is quite evident that the user has not “typed” the message; it has been sent by TinyChat based on a previous user action. Facebook also claims that our application was manually reviewed. That makes the situation even worse: we had highlighted other applications that should have been disabled by FB’s own logic, yet they selectively disabled our application while fully aware of those others and allowing them to remain. The facts are here for people to judge; we are merely stating them.
A brief description of the Video Calling Application
The Video Calling application was powered by our real-time communication and collaboration platform, uniRow. It allowed group video chat (up to four people) with your Facebook and non-Facebook contacts and was purely browser-based. In addition, users could leave a video/voice message on a friend’s Wall or on their own Wall. Unlike most social video chat applications, which end up being centres for perversion, we spent hours ensuring the quality of the people who liked the page: we actually monitored users and blocked those with objectionable profile pictures and details. Of course, we did not have a random-chat option. In about 3 months we had 22K users, around 8.5K of them active, and 4.5K likes. If I remember correctly, there were more than 120 ratings with an average of 4.7 out of 5. We had users from every continent and were scaling reasonably well using our globally distributed servers. I don’t think spammy applications have such a high like-to-active-users ratio. Here is a snapshot of our FB application statistics page.
We had spent around $4000 running ads on Facebook to promote the application. It is worth noting that Facebook has a review mechanism for advertisements, so it is only fair to assume that spammy applications would not be able to run ads. If we were in violation, why was Facebook running (and encouraging us to run more) advertisements? Clearly something is not right.
What has happened here?
The question to ask is what happened around April 7th (exactly 3 months before the launch of Facebook-Skype Video Calling). It is clear that when the plan for rolling out their application was decided, Facebook wanted to use the phrase “Video Calling” and therefore wanted the URL. Instead of communicating this to the page (and application) owners, it went ahead and disabled the application. This is grossly undemocratic and probably illegal (we are looking into this aspect). We tried our best to get the application reinstated, but did not succeed.
We are asking the following questions:
- Can Facebook prove that we were violating any policy? If not, why wasn’t the application reinstated when we requested it?
- How did they determine that we violated their policy? Did they dig into the application because we had a URL they wanted, and then go looking for a lame reason to disable it?
- If we were violating policies, why did their advertisement “quality” check approve the application and the page? And if their policy-violation detection is indeed automated, shouldn’t it have flagged the violation from day one?
- What happens to the loss of business and the $$$ burnt in advertisement?
- If Facebook wanted to use the URL, we would not have had a problem had they shared that intent. Disabling an application on a false pretext causes a loss of reputation for an organization. How does Facebook plan to mend that? It probably needs to mend its own reputation first!
This reiterates the practices and ethics of Facebook that have been discussed by everyone from movie producers to the average internet user. Facebook needs to remember that its phenomenal growth has been driven in large part by the application developers who have built engaging applications. By mistreating the developer community, it is making its own future difficult – especially now that Google Plus is turning out to be a really “awesome” product.
As organizations and individuals, we face challenging situations every day. It requires courage and character to stand up for what is right, even when it means some inconvenience in the short term. Facebook has failed to make the right choice on way too many occasions.
If you had the patience to read this post, I welcome your comments and suggestions. We are considering our future course of action to resolve this issue, and any input on that would be greatly appreciated.
[PS: We are not going to post anything related to this on our corporate website, because we already made our point on April 8th. We do, however, think this issue needs to be highlighted in larger circles, and that Facebook’s misplaced policies and practices need to be thoroughly discussed.]