Texifter personnel travel next to Amsterdam to present and exhibit at the “Predictive Analytics Innovation Summit” on November 22 & 23, 2011. On Day 1, I will give a talk titled “Humans and Machines Working Together” that details our work on custom machine classifiers. Use cases will be drawn from HR analytics pilot projects involving JetBlue and ESPN internal survey data. On Day 2, I will serve as Chairperson and moderator. We look forward to sharing what we learn on this blog, and through our tool development, in the weeks and months to come.
Thanks to some excellent groundwork by Joe Delfino and Sean Kelleher, Joe, Sean and I were able to make a pilgrimage to Google, Facebook and Reputation.com for a wildly exciting day of briefings with Q&A. While I’d love to share the details, I can’t! Big secret 😉 However, I can share a few pictures and stories from our day in Silicon Valley…

Stu at Google – Take-away message: “This was a great meeting!”

Sean at Google – “I could move to California.”

Joe at Google, after spending the week in the Bay Area attending the 2011 Sentiment Symposium and the Text Analytics News Conference – “I am already (in my mind) living in California and running the west coast operation.”

Stu and his well-used Camaro. While running a bit behind schedule on the way to Reputation.com, it is alleged the driver took advantage of the fast-moving California 101 freeway, the state’s liberal U-turn policy, certain optional passing strategies based on scenes from action and/or science fiction films, and his passengers’ stomachs.

Joe at Facebook – Joe Delfino got us this meeting. Joe gets meetings. Joe is a meeting-getting animal. We like Joe. When my son saw this picture of his Dad at Facebook on Facebook, he said: “Wow, Dad; you look really happy!”

I sure was happy. We had come from Google feeling deeply engaged by one of the greatest companies in the history of capitalism, and now we were sitting in the lobby of another. We had lunch with a gracious host at the company cafeteria and a demo with a diverse group of Facebook sentiment analysts. After years of academic presentations, the freedom to present in jeans and a QDAP t-shirt was a perk I could probably get used to. The meme ‘west coast office’ was heard frequently as we blazed out of Palo Alto and headed for Redwood City.
After the long day in Silicon Valley, the team got stuck in 101 rush-hour traffic, slightly grouchy and despondent, but made it to a wonderful restaurant, Burma Superstar, in the Pacific Heights neighborhood for beer, food, and good company near a place where a Hobbit had been spied. By the time we had returned the Camaro and made it to the train to the SFO terminal for our red-eye, we all realized the magnitude of the day we had had. It was a huge lift for our confidence and an exciting glimpse into where Texifter is going. It is nearly certain that Texifter will be back on the West Coast soon.
DiscoverText is rolling out an addition to its analytical toolkit: random sampling. The Web service already offers an array of tools for text analytics and rigorous, team-based qualitative data analysis. These functions include the ability to code and annotate text, measure inter-rater reliability, adjudicate coder validity, attach memos to text, cluster duplicate and near-duplicate documents, share documents, and classify text using an active-learning naive Bayes classifier.

While still in beta, random sampling is a key new addition. After DiscoverText users amass extraordinary amounts of social media data (for example via the public Twitter API, GNIP PowerTrack, or the Facebook Social Graph), they can now more easily extract a random sample for analysis. The size of the sample is decided by the user in order to accommodate iteration, experimentation and other scientific methods. The option is streamlined into the dataset creation process: on the new dataset creation page, you see a sample size prompt.

This additional method for data preparation and analysis augments current information retrieval techniques, such as search with advanced filtering. It also builds out our framework for expanding available NLP methods from straightforward Bayesian classification, which aims to analyze substantial quantities of data in their original bulk form, to a menu of computationally intensive methods that can iterate more quickly and effectively against random data samples. For example, the LDA topic model tool we are releasing will be faster and more effective against smaller random samples. This new feature supports both an additional analytical approach and the opportunity to easily compare results between competing (or complementary) analytic methods. We look forward to experimenting with this new tool and hearing about how random sampling will enhance the research of our users and users to come.
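DiscoverText performs the sampling server-side during dataset creation, so users never write code for it. Purely as an illustration of the underlying idea, here is a minimal sketch in Python of drawing a simple random sample (without replacement) from a large archive so that a slower method like LDA can iterate on a manageable subset; all names here are hypothetical and not part of the DiscoverText API:

```python
import random

def random_sample(docs, sample_size, seed=None):
    """Return a simple random sample of documents, without replacement.

    If the requested size meets or exceeds the archive size, the whole
    archive is returned (a sample can't be larger than its source).
    """
    rng = random.Random(seed)  # seeding makes the draw reproducible
    if sample_size >= len(docs):
        return list(docs)
    return rng.sample(docs, sample_size)

# Hypothetical example: pull 500 items from a 100,000-tweet archive
# for quick, repeatable topic-model experiments.
archive = ["tweet %d" % i for i in range(100_000)]
sample = random_sample(archive, 500, seed=42)
```

Because every item has an equal chance of selection, results computed on the sample (code frequencies, topic proportions, and so on) can be treated as estimates for the full archive, which is what makes comparing competing methods on the same sample meaningful.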
Special Note to DT Users: We need to turn this feature on one account at a time while we are testing it. Drop us a line if you want to try the tool. We’ll keep you posted on the launch as more dataset modifications are pushed live. As always, if you have any questions, feel free to email us anytime at email@example.com. Your feedback is crucial. Sign up and try it out for yourself at discovertext.com.
We have been delighted with the response to our call for beta testers to try the GNIP-enabled PowerTrack for Twitter. You can still sign up; Round 1 of the beta test concludes on October 31, 2011. Even just testing the system’s data filtering and collection capabilities for 1 or 2 days, or as few as 1–2 hours, may convert you to a devoted GNIP-via-DiscoverText user. As part of taking beta tester applications, we asked folks to tell us something about how they planned to use the beta test opportunity. Thanks to “Wordle,” we can visualize an answer to the question: “Why do people want to take part in the GNIP beta test via DiscoverText?”
This is an 11-minute tutorial covering how to get started using the GNIP PowerTrack for Twitter (the “full firehose”) to capture large numbers of Tweets for analysis.