We highly appreciate user feedback for continuous improvement.
With hundreds of Bite exercises and thousands of reviews, it’s easy to get overwhelmed by the data. 😱
How do you uncover insights from this sea of feedback? Use code! 😎
Enter TextBlob, a Python library that abstracts away the complexities of Natural Language Processing (NLP).
This article walks you through a practical example: analyzing the sentiment of Bite reviews to identify which exercises need the most immediate attention and which ones delight people the most (perhaps to highlight these more on social media …).
Using TextBlob alongside SQLAlchemy’s automap feature, we’ll show you how to make sense of user sentiment with relatively little effort. 🚀
Problem Statement
Managing user feedback is challenging when dealing with:
- High volumes of unstructured text data.
- The need to quickly identify areas requiring improvement.
- Limited time / resources to implement a sophisticated NLP pipeline.
By leveraging TextBlob to perform sentiment analysis on Bite reviews, we quickly gained actionable insights into which exercises receive the most positive vs. negative feedback.
Setup
To get started:
- Clone the repository.
- Run `make setup` to create a virtual environment + install dependencies using uv.
- Ensure your database connection URL is set in an `.env` file using the `DATABASE_URL` variable, e.g. `DATABASE_URL=postgresql://postgres:password@0.0.0.0:5432/reviews`.
- To create this database and add some fake data, run `make db` – this will execute the included `data.sql` file.
$ git clone https://github.com/bbelderbos/nlp-bite-feedback
...
$ cd nlp-bite-feedback
$ make setup
uv sync
Using CPython 3.13.0
Creating virtual environment at: .venv
...
+ textblob==0.19.0
+ tqdm==4.67.1
+ typing-extensions==4.12.2
$ echo "DATABASE_URL=postgresql://postgres:password@0.0.0.0:5432/reviews" > .env
$ make db
createdb reviews && psql -U postgres -d reviews -f data.sql
CREATE TABLE
INSERT 0 6
New to Makefiles? Check out our article or YouTube video.
The Script in Action
How It Works
The script:
- Queries the database for Bite reviews using SQLAlchemy’s automap.
- Uses TextBlob to calculate the sentiment polarity for each review.
- Aggregates the results to display:
- Average sentiment scores.
- The number of comments per Bite.
- Detailed reviews for a specific Bite upon request.
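The aggregation in steps 2 and 3 can be sketched in plain Python. Assuming polarity scores have already been computed per review (the data below is illustrative, not the actual script's query results):

```python
from collections import defaultdict
from statistics import mean

# (bite_id, polarity) pairs as they might come back from the
# query + TextBlob scoring steps; values are made up for illustration.
scored_reviews = [
    (276, -0.25), (276, -0.225),
    (142, 0.5), (142, 0.5625),
    (229, 0.75), (229, 0.91),
]

# Group polarity scores per Bite.
by_bite = defaultdict(list)
for bite_id, polarity in scored_reviews:
    by_bite[bite_id].append(polarity)

# Sort ascending by average sentiment so the Bites needing
# attention surface first.
for bite_id, scores in sorted(by_bite.items(), key=lambda kv: mean(kv[1])):
    print(f"{bite_id} | {len(scores)} | {mean(scores)}")
```

With this sample data, Bite 276 comes out first with a negative average, mirroring the output shown below.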
Example Output
Running the script without arguments displays the sentiment scores for all Bites – it’s clear that Bite #276 needs attention:
$ uv run python script.py
bite id | # comments | avg sentiment score
276 | 2 | -0.2375
142 | 2 | 0.53125
229 | 2 | 0.83
To view the reviews for a specific Bite, pass its ID as an argument:
$ uv run python script.py 229
0.75 | Nice Bite! Learned (once again) to always proof-read my code.
0.91 | I have always struggled with loops, so this was very good practice.
$ uv run python script.py 276
-0.25 | It was only difficult because I forgot why we were defining.
-0.23 | Not sure if I am missing something on this one.
Key Concepts
Sentiment Analysis with TextBlob
TextBlob makes sentiment analysis effortless:
- Polarity: Measures how positive (1.0) or negative (-1.0) a comment is.
- Subjectivity: Determines how subjective (opinion-based) or objective a comment is.
For example:
from textblob import TextBlob
comment = "This Bite was incredibly helpful and fun!"
sentiment = TextBlob(comment).sentiment
print(sentiment.polarity) # 0.8 (positive)
print(sentiment.subjectivity) # 0.75 (fairly subjective)
SQLAlchemy Automap
SQLAlchemy’s automap dynamically maps database tables to Python objects:
from sqlalchemy.ext.automap import automap_base

Base = automap_base()
Base.prepare(autoload_with=engine)  # reflects the schema (SQLAlchemy 1.4+/2.x API)
REVIEW_TABLE = Base.classes.bites_biteconsumer
This eliminates the need to define models manually as you would typically do with ORMs. As per the SQLAlchemy Automap docs:
> Define an extension to the `sqlalchemy.ext.declarative` system which automatically generates mapped classes and relationships from a database schema, typically though not necessarily one which is reflected.
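To see automap end to end without a Postgres instance, here is a minimal, self-contained sketch against an in-memory SQLite database (the table and column names are illustrative, not the production schema):

```python
from sqlalchemy import create_engine, select, text
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session

# In-memory SQLite stand-in for the Postgres reviews database.
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE reviews (id INTEGER PRIMARY KEY, bite_id INTEGER, comment TEXT)"
    ))
    conn.execute(text(
        "INSERT INTO reviews (bite_id, comment) VALUES (229, 'Nice Bite!')"
    ))

# Automap reflects the schema and generates a mapped class per table.
Base = automap_base()
Base.prepare(autoload_with=engine)
Review = Base.classes.reviews

with Session(engine) as session:
    rows = session.execute(select(Review)).scalars().all()
    print(rows[0].bite_id, rows[0].comment)
```

Note that automap only maps tables with a primary key; without one, the table is skipped.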
Take-aways
- Fast Results: TextBlob abstracts NLP complexities, enabling rapid prototyping.
- Actionable Insights: Sentiment analysis highlights areas needing improvement.
- Scalable: Combine with dashboards or alerts for real-time insights into user sentiment.
By leveraging simple but powerful tools, you can uncover patterns in user feedback and continuously improve your platform.
Taking it a Step Further
While TextBlob provides a quick and effective way to analyze sentiment, there’s room for refinement.
One improvement could be fine-tuning sentiment thresholds—for example, adjusting how strongly negative reviews are flagged based on historical trends or specific keywords.
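As a rough sketch of what threshold tuning might look like (the threshold value, keyword set, and helper name are all made up for illustration):

```python
NEGATIVE_THRESHOLD = -0.1  # tune based on historical trends
ALERT_KEYWORDS = {"broken", "confusing", "stuck"}  # domain-specific triggers

def needs_attention(polarity: float, comment: str) -> bool:
    """Flag a review when its polarity falls below the threshold,
    or when a mildly scored comment contains an alert keyword."""
    if polarity < NEGATIVE_THRESHOLD:
        return True
    words = set(comment.lower().split())
    return polarity < 0.3 and bool(words & ALERT_KEYWORDS)

print(needs_attention(-0.25, "Not sure if I am missing something"))  # True
print(needs_attention(0.1, "I got stuck on the regex part"))         # True
print(needs_attention(0.8, "Loved this one!"))                       # False
```

The keyword check catches reviews that score near-neutral but still mention known pain points, which a pure polarity cutoff would miss.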
Additionally, for more nuanced sentiment analysis, you could incorporate an AI-based model (e.g., OpenAI’s API or a fine-tuned Hugging Face model).
These models can better handle sarcasm, context, and domain-specific language, making sentiment classification even more accurate.
Let’s put this to the test. Here is a quick script that uses Marvin AI to perform sentiment analysis on a text input (if you’re new to Inline Script Metadata, check this article).
And it works pretty well! 📈
$ uv run sentiment.py --text "Oh great, another bug in production! This just made my day."
Polarity: -0.5
Subjectivity: 0.8
Summary: The sentiment is quite negative with a high level of subjectivity, expressing frustration and sarcasm.
$ uv run sentiment.py --text "The weather was cold, crisp, and refreshing. I loved it"
Polarity: 0.5
Subjectivity: 0.6
Summary: The sentiment is positive, reflecting enjoyment and appreciation of the weather.
$ uv run sentiment.py --text "The app crashed twice, but the debugging logs made it easy to fix."
Polarity: 0.0
Subjectivity: 0.5
Summary: The sentiment is neutral with balanced positive and negative statements about the app.
As demonstrated, AI appears more capable than TextBlob, which returned the following scores for the same texts:
$ uv run sentiment_textblob.py
Oh great, another bug in production! This just made my day.
1.0 # did not catch frustration / sarcasm!
0.75
The weather was cold, crisp, and refreshing. I loved it.
0.2125 # could have been more positive
0.8041666666666667
The app crashed twice, but the debugging logs made it easy to fix.
0.43333333333333335 # more on the positive side
0.8333333333333334
As seen in the examples, TextBlob struggles with sarcasm and contextual sentiment, whereas AI-powered models handle nuance significantly better.
So while TextBlob wins on ease, if we’re willing to pay a bit of money, OpenAI’s API makes this even more sophisticated with almost equally little code. 📈
Combining this with alerts or dashboards could provide real-time insights into user sentiment, allowing for faster and more targeted exercise improvements.
Try running one of those scripts, or something similar, on your own data and share your findings in our community. 💡