Big Data is commonly thought of as a large-scale data processing problem characterized by the three “V”s: velocity, variety, and volume. But if we think of Big Data workloads as just a parallel processing challenge or a traditional serial processing challenge, we may not be seeing the big picture. What kind of problem are we trying to solve, and what kind of resources do we need to throw at it? I’ve put together a little chart to try to summarize the landscape.