Nearly a month after Google's refresh of the Panda algorithm and the introduction of a new one, later dubbed Penguin, comparing the two is tough because they arrived almost simultaneously. When a site gets hit hard by algorithm changes, what one sees and what one understands can be very different things. Algorithms can be built from layered conditions: rules, rules about rules, and rules that trigger those conditions, which, depending on the complexity, may need to be governed by still other rules to prevent conflicts within the structure that produces the desired search results. Some of the recent changes that were glaring, even humorous, are "tells" pointing to other conditions in the updates; they stand apart visually, yet everything is related.
For Panda, the prevailing opinion is that it weeds out content that comes from mills or ghost writers who sell to many sites, neutralizing all but the original source; in two words, duplicate content. A small-scale example of this is when one site is the focus of an SEO campaign, or is just plain successful. It's often tempting to replicate the popular content and wrap a new site, or twenty, around it to create more relevant links than one can earn naturally, or without effort. These artificial "satellites" have been hit hard, and so have the sites the content came from. The owners of content farms have cried foul, but the reason behind that is simple: there are rules about rules, as mentioned previously, and everything is related. The links from all those "B" sites get devalued, or worse, along with the sites themselves, which immediately lowers the value, or worse, of site "A". Also hit were sites relying on overdone optimization techniques, including backlinks from anywhere at all. Directories and other sites that allow mass linking are usually unrelated to what they link to, and this is easily recognizable; the links that carry value will be those from related or similar sites along the same subject lines.
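Google has never published how Panda detects duplicates, but one well-known, public technique for spotting near-duplicate text, offered here purely as an illustrative sketch and not as Google's actual method, is w-shingling with Jaccard similarity: split each page's text into overlapping word runs and measure how many runs two pages share.

```python
def shingles(text, w=3):
    """Split text into overlapping runs of w words ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets: |A & B| / |A | B|."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

# Hypothetical page texts for demonstration only
original = "the quick brown fox jumps over the lazy dog near the river"
copycat  = "the quick brown fox jumps over the lazy dog near the pond"
fresh    = "penguins waddle across antarctic ice in the winter months"

print(jaccard(original, copycat))  # high score: near-duplicate content
print(jaccard(original, fresh))    # low score: unrelated content
```

A crawler comparing the satellite "B" sites against site "A" with something in this spirit would see scores near 1.0, which is exactly the kind of signal that lets an algorithm neutralize all but one copy.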
With Penguin it's seemingly more about the quality of the ordering of sites as a whole, especially since there appears to be more focus on expanding the variety of results for generic queries; a change that may enhance the Knowledge Graph, perhaps. Before the updates, search results WHEN NOT LOGGED IN to a Google account favored popular and popularized terms, mixed with copycat sites using overpowering SEO loopholes, like hundreds of links from no-content directories, to inflate their apparent popularity and so manipulate PageRank. After the updates, it became necessary to be more specific, or to look deeper into the results pages, when typing in familiar short terms. The most obvious targets were sites with paid links, some massively distributed and passing PageRank in an effort to use that signal to gain "popularity" with Googlebot.
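The original PageRank formula, unlike Panda or Penguin, is public (Brin and Page described it in 1998), and a toy computation shows why hundreds of thin directory links were worth buying. This is a simplified illustrative sketch with made-up page names, not Google's production ranking: a "target" page collecting links from 200 empty directory pages dwarfs an "honest" page with none.

```python
def pagerank(links, d=0.85, iters=50):
    """Naive power-iteration PageRank. `links` maps page -> list of outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:      # each outlink passes an equal share
                    new[q] += share
            else:                   # dangling page: spread rank evenly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Hypothetical graph: 200 no-content directory pages all link to "target".
links = {"target": [], "honest": []}
for i in range(200):
    links[f"dir{i}"] = ["target"]

ranks = pagerank(links)
print(ranks["target"], ranks["honest"])  # target's score dwarfs honest's
```

Devaluing those directory links, as the updates appear to do, collapses the target's score back toward the honest page's, which matches the sudden ranking drops that paid-link sites reported.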
An example of these two updates combined can be seen easily in searches for abbreviations and acronyms. Where a search for "IDOC inmate search" knows the user is looking for the Illinois prison website, a search for the term IDOC alone is less specific, and the results now reflect that with a better cross-section of sites using IDOC in other senses (niches, to most readers), rather than the sites that previously bullied their way in with paid links and skewed the results toward prison inmates and inmate searches. The comparison should be visible to anyone focused on niche terms in both their work and their analysis.