Talksum and a Faster MongoDB

Speeding up existing parts of the data infrastructure is a big part of what we do here at Talksum. As part of that mission, we work with a variety of data store technologies, and MongoDB is one of our favorite third-party NoSQL data stores; it's one we've worked hard to speed access to. We've also focused on improving performance at the hardware level, teaming up with Fusion-io to integrate the Fusion-io flash memory platform into the Talksum appliances.

So we were very happy to present alongside Fusion-io at the recent MongoSV conference. MongoSV is an annual one-day conference in Silicon Valley, CA, dedicated to the open source, non-relational database MongoDB. This year's conference included over 50 sessions by 10gen engineers and by MongoDB users from companies such as foursquare, GitHub, Apollo Group (University of Phoenix), and AOL.

For our presentation, Talksum CTO Dale Russell and Platform Architect Brian Knox were joined by Fusion-io Solution Architect Matthew Kennedy. The presentation highlighted how Talksum's architectural approach, combined with Fusion ioMemory technology, can solve some interesting customer challenges.

Specifically, we highlighted how Talksum's data aggregation and filtering can simplify NetFlow data processing and monitoring. We used MongoDB as the target storage layer, taking advantage of optimization concepts from our platform along with the newest MongoDB features.
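To make that concrete in a rough, hypothetical way, the sketch below shows the general pattern rather than our actual pipeline: NetFlow-style flow records are aggregated by conversation, low-volume traffic is filtered out, and only the summaries are written to MongoDB (here via PyMongo). The field names, the byte threshold, and the "netflow" database are assumptions made for the example.

```python
# Illustrative only: field names, thresholds, and database names are assumptions,
# not Talksum's actual schema. Requires pymongo and a local mongod instance.
from collections import defaultdict
from pymongo import MongoClient

MIN_BYTES = 1_000_000  # example threshold: drop conversations below ~1 MB


def summarize(flows):
    """Collapse individual flow records into per-conversation byte totals."""
    totals = defaultdict(int)
    for flow in flows:
        key = (flow["src_ip"], flow["dst_ip"], flow["dst_port"])
        totals[key] += flow["bytes"]
    # Keep only the conversations large enough to be interesting downstream.
    return [
        {"src_ip": s, "dst_ip": d, "dst_port": p, "bytes": b}
        for (s, d, p), b in totals.items()
        if b >= MIN_BYTES
    ]


if __name__ == "__main__":
    sample_flows = [
        {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 443, "bytes": 800_000},
        {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 443, "bytes": 600_000},
        {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.4", "dst_port": 80, "bytes": 50_000},
    ]
    summaries = summarize(sample_flows)
    if summaries:
        client = MongoClient("mongodb://localhost:27017")
        client["netflow"]["flow_summaries"].insert_many(summaries)
```

The point of the pattern is simply that aggregation and filtering happen before storage, so the database only ever sees the reduced data set.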

More details can be seen in the presentation recording here, with our portion starting at slide 21:

http://www.10gen.com/presentations/mongiops-your-favorite-data-store-only-faster

Breaking the Silence: Real-Time Data Management and the Talksum Private Beta

Things have been quiet here on the Talksum blog for quite a while, but it certainly hasn't been for lack of activity behind the scenes. Since our last few posts, which were limited to technical notes on projects we've contributed to, we have been hard at work on two major fronts.

First, we've been building our product and moving from our research prototype to our commercial offering. On that front, we're very excited to announce that we're well under way with the private beta program for our appliance-based Talksum Data Stream platform. We've been fortunate to work with several major enterprises across different verticals, preparing our product to be let loose in the wild on real-world data management and analytics problems. We're working with NetFlow data to optimize data center topology and disaster recovery strategy. We're enriching data in real time for optimized integration. We've been proving how our real-time data management and data reduction tools can shrink storage requirements and decrease latency. As I said, we've been busy.
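To give a feel for what "enriching data in real time" means in practice, here is a minimal, hypothetical sketch: each record is annotated with lookup metadata (here, a data center label derived from its source IP) as it streams through, before being handed downstream. The mapping table, field names, and naive /24 matching are illustrative assumptions, not how the Talksum platform actually performs enrichment.

```python
# Illustrative only: the site map, field names, and naive /24 matching are
# assumptions for the example, not the Talksum platform's enrichment rules.
from typing import Dict, Iterable, Iterator

SITE_MAP: Dict[str, str] = {
    "10.0.0.0/24": "dc-east",
    "10.0.1.0/24": "dc-west",
}


def site_for(ip: str) -> str:
    """Map a source IP to a data center label via a rough /24 lookup."""
    prefix = ".".join(ip.split(".")[:3]) + ".0/24"
    return SITE_MAP.get(prefix, "unknown")


def enrich(records: Iterable[dict]) -> Iterator[dict]:
    """Annotate each record as it streams through, then pass it downstream."""
    for record in records:
        record["site"] = site_for(record["src_ip"])
        yield record


if __name__ == "__main__":
    incoming = [
        {"src_ip": "10.0.0.5", "bytes": 1200},
        {"src_ip": "192.168.1.9", "bytes": 90},
    ]
    for enriched_record in enrich(incoming):
        print(enriched_record)
```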

Second, we've been using these experiences to hone our product vision. As we've talked to numerous companies and started working with our initial group of beta candidates, we've become increasingly passionate about the need for new approaches to “Big Data.”

We see lots of vendors working on how to run analytics over larger data sets, how to spin enterprise offerings out of hugely distributed cloud approaches, or how to retool the Hadoop ecosystem for lower-latency workloads. However, the more we talk to big companies, the more we see an underlying need to reassess broader data management practices. That's where we see Talksum fitting in: we bring real-time processing to the core of data management, data integration, and data-focused solution development.

We'll be writing much more about Real-Time Data Management and the benefits it brings to the enterprise. For now, we're happy to announce our private beta and to tell the world that we're excited to be coming out of stealth and into the fray!