This is the latest DiscoverText filtering feature designed to speed up the creation of accurate custom machine classifiers. This video shows how we use an interactive display of classifier scores to isolate items in a dataset that require further human coding to improve the accuracy of the classifier. Click on the screenshot below to start the video.
This is Version 1 of the 60-second Texifter elevator pitch. Feedback and questions are truly welcome. Just email firstname.lastname@example.org. Thanks!
We asked folks signing up for the 2nd GNIP beta using DiscoverText why they were doing it. Here is a nice Wordle showing some of the common themes: We also asked for job titles. No surprise, the professors lead the way. The sign-up remains open. Jump in and let us know if you like our Enterprise solution for social media analytics.
DiscoverText is rolling out an addition to its analytical toolkit: random sampling. The Web service already offers an array of tools for text analytics and rigorous, team-based qualitative data analysis. These functions include the ability to code and annotate text, measure inter-rater reliability, adjudicate coder validity, attach memos to text, cluster duplicate and near-duplicate documents, share documents, and classify text using an active-learning naive Bayesian classifier. While still in beta, random sampling is a key new addition.

After DiscoverText users amass extraordinary amounts of social media data (for example via the public Twitter API, GNIP PowerTrack, or the Facebook Social Graph), they can now more easily extract a random sample for analysis. The size of the sample is decided by the user in order to accommodate iteration, experimentation, and other scientific methods. The option is streamlined into the dataset creation process: on the new dataset creation page, you see a sample size prompt.

This additional method for data prep and analysis augments current information retrieval techniques, such as search with advanced filtering. It also builds up our framework for expanding available NLP methods from straightforward Bayesian classification, which aims to analyze substantial quantities of data in their original bulk form, to a menu of computationally intensive methods that can iterate more quickly and effectively against random data samples. For example, the LDA topic model tool we are releasing will be faster and more effective against smaller random samples. This new feature supports both an additional analytical approach and the opportunity to easily compare results between competing (or complementary) analytic methods. We look forward to experimenting with this new tool and hearing about how random sampling will enhance the research of our users and users to come.
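For readers curious about the underlying idea, the sampling step described above can be sketched in a few lines of Python. This is only an illustrative sketch, not DiscoverText's actual implementation: the `documents` list, the `draw_random_sample` helper, and the seed parameter are all hypothetical names chosen for the example.

```python
import random

def draw_random_sample(documents, sample_size, seed=None):
    """Return a random sample of documents without replacement.

    A fixed seed makes the sample reproducible, which matters for
    the kind of iterative, scientific workflows described above.
    If the requested size meets or exceeds the dataset size, the
    whole dataset is returned.
    """
    rng = random.Random(seed)
    if sample_size >= len(documents):
        return list(documents)
    return rng.sample(documents, sample_size)

# Hypothetical usage: draw 3 items from a small mock dataset.
docs = [f"tweet_{i}" for i in range(10)]
sample = draw_random_sample(docs, 3, seed=42)
```

Sampling without replacement and pinning the seed lets an analyst rerun a topic model or classifier on the exact same subset, which is what makes side-by-side comparison of competing methods practical.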
Special note to DT users: while we are testing this feature, we need to turn it on one account at a time. Drop us a line if you want to try the tool. We’ll keep you posted on the launch as more dataset modifications are pushed live. As always, if you have any questions, feel free to email us anytime at email@example.com. Your feedback is crucial. Sign up and try it out for yourself at discovertext.com.