It was a great joy to return to the University of Amsterdam and give this talk to my old friend Richard Rogers and his 100+ attentive workshop attendees.
Hurricane Sandy kept me from giving this talk, titled “Fear & Loathing on the Social Campaign Trail,” in San Francisco, so here it is as a screencast.
Texifter is launching a second beta test period for “PowerTrack for Twitter” firehose filtering, a service provided by GNIP. We have streamlined the process of providing enterprise-class access to the beta test. This beta includes access to an expanding set of tools for archiving, filtering, coding, validating, and machine-classifying text. You can train a custom machine classifier in about 30 minutes. Sign up for the beta test here.

The GNIP PowerTrack, in partnership with Twitter, provides users with unrestricted, real-time filtering of the Twitter firehose. This enriched feature gives DiscoverText users a valuable analytical tool. Not only will the GNIP PowerTrack provide users with access to the full stream of firehose data, it will also provide Klout scores, language data, retweet frequency, geographic coordinates, and all #hashtags where available in the results. Taken together, this quantity of data and these rich metadata fields will allow users to perform valuable social media analysis within DiscoverText. For more information: info@DiscoverText.com
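To make the kind of filtering described above concrete, here is a minimal sketch of matching enriched tweet records against simple criteria (hashtag, language, minimum Klout score). The record layout and field names here are illustrative assumptions, not the actual GNIP payload format or PowerTrack rule syntax.

```python
# Hypothetical sketch: filtering enriched tweet records of the kind the
# post describes (Klout score, language, geo, hashtags). The dict keys
# below are assumptions for illustration only.

def matches_filter(tweet, hashtag=None, language=None, min_klout=0):
    """Return True if an enriched tweet record passes the given filters."""
    if hashtag is not None and hashtag not in tweet.get("hashtags", []):
        return False
    if language is not None and tweet.get("language") != language:
        return False
    if tweet.get("klout_score", 0) < min_klout:
        return False
    return True

# A tiny made-up sample of enriched records.
stream = [
    {"text": "Big storm coming", "hashtags": ["#sandy"], "language": "en",
     "klout_score": 55, "retweets": 12, "geo": (40.7, -74.0)},
    {"text": "Hola mundo", "hashtags": [], "language": "es",
     "klout_score": 20, "retweets": 0, "geo": None},
]

# Keep only English tweets tagged #sandy.
matches = [t for t in stream if matches_filter(t, hashtag="#sandy", language="en")]
```

In a real PowerTrack integration, rules are evaluated server-side against the full firehose; a client-side predicate like this is only useful for post-hoc work on an archived collection.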
DiscoverText is rolling out an addition to its analytical toolkit: random sampling. The web service already offers an array of tools for text analytics and rigorous, team-based qualitative data analysis. These functions include the ability to code and annotate text, measure inter-rater reliability, adjudicate coder validity, attach memos to text, cluster duplicate and near-duplicate documents, share documents, and classify text using an active-learning naive Bayes classifier. While still in beta, random sampling is a key new addition.

After DiscoverText users amass large amounts of social media data (for example via the public Twitter API, the GNIP PowerTrack, or the Facebook Social Graph), they can now more easily extract a random sample for analysis. The user decides the size of the sample, which accommodates iteration, experimentation, and other scientific methods. The option is streamlined into the dataset creation process: on the new dataset creation page, you will see a sample size prompt.

This additional method for data preparation and analysis augments current information retrieval techniques, such as search with advanced filtering. It also builds out our framework for expanding available NLP methods, from straightforward Bayesian classification, which analyzes large quantities of data in their original bulk form, to a menu of computationally intensive methods that can iterate more quickly and effectively against random samples. For example, the LDA topic model tool we are releasing will be faster and more effective against smaller random samples.

This new feature supports both an additional analytical approach and the opportunity to easily compare results between competing (or complementary) analytic methods. We look forward to experimenting with this new tool and hearing how random sampling enhances the research of our users and users to come.
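The sampling step described above amounts to drawing a simple random sample without replacement from a document collection. The sketch below shows one way to do that; the function name and the seeded-RNG convention are our own assumptions, not DiscoverText's implementation.

```python
import random

def random_sample(dataset, sample_size, seed=None):
    """Draw a simple random sample of documents without replacement.

    If the requested size meets or exceeds the dataset size, the whole
    dataset is returned in shuffled order. A fixed seed makes the draw
    reproducible, which helps when comparing analytic methods on the
    same sample.
    """
    rng = random.Random(seed)
    size = min(sample_size, len(dataset))
    return rng.sample(dataset, size)

# Example: sample 100 documents from a collection of 1,000.
docs = [f"doc-{i}" for i in range(1000)]
sample = random_sample(docs, 100, seed=42)
```

Seeding the draw is the key design choice here: a reproducible sample lets two methods (say, bulk naive Bayes classification versus LDA on the sample) be compared on identical inputs.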
Special Note to DT Users: We need to turn this feature on one account at a time while we are testing it. Drop us a line if you want to try the tool. We’ll keep you posted on the launch as more dataset modifications are pushed live. As always, if you have any questions, feel free to email us anytime at firstname.lastname@example.org. Your feedback is crucial. Sign up and try it out for yourself at discovertext.com.