Create Project-Less Python Utilities with uv and Inline Script Metadata

By Bob Belderbos on 17 January 2025

The other day I wanted to demo the Google Books API (which we use for Pybites Books) to somebody, so I started writing some code on the fly to call its endpoints using httpx.

Then I thought it would be nice to turn it into a small script to search for books and view details from the command line.

And to make it a one-off script that I could run without having to create a full Python project.

Enter inline script metadata + uv

Now with PEP 723 – Inline script metadata 🐍 you can:

This PEP specifies a metadata format that can be embedded in single-file Python scripts to assist launchers, IDEs and other external tools which may need to interact with such scripts.

And uv supports it (see docs) 📈
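
To give you an idea of the format before the full script below, a minimal block could look like this (the requires-python pin and the single httpx dependency here are just an illustration):

# /// script
# requires-python = ">=3.12"
# dependencies = [
#   "httpx",
# ]
# ///
import httpx

print(httpx.get("https://www.googleapis.com/books/v1/volumes?q=python").status_code)

uv reads this header, creates an environment with the listed dependencies and the requested Python version, and runs the script.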

The code

Here is the script I came up with (see also this gist).

Apart from httpx, I use Typer, which makes it very easy to create command line apps, and a bit of BeautifulSoup:

# /// script
# dependencies = [
#   "bs4",
#   "httpx",
#   "typer"
# ]
# ///
import textwrap

from bs4 import BeautifulSoup
import httpx
import typer

app = typer.Typer()

BASE_URL = "https://www.googleapis.com/books/v1/volumes"
BOOK_URL = BASE_URL + "/{}"
SEARCH_URL = BASE_URL + "?q={}&langRestrict=en"


def search_books(term: str):
    """Search books by term."""
    query = SEARCH_URL.format(term)
    response = httpx.get(query)
    response.raise_for_status()

    books = []
    for item in response.json().get("items", []):
        try:
            google_id = item["id"]
            title = item["volumeInfo"]["title"]
            books.append((google_id, title))
        except KeyError:
            continue

    return books


def get_book_details(book_id: str):
    """Retrieve details for a specific book."""
    book_url = BOOK_URL.format(book_id)
    response = httpx.get(book_url)
    response.raise_for_status()
    return response.json().get("volumeInfo", {})


def clean_and_shorten_description(description: str, max_length: int = 300):
    """Remove HTML tags from the description and truncate it."""
    plain_text = BeautifulSoup(description, "html.parser").get_text()
    return textwrap.shorten(plain_text, width=max_length, placeholder="...")


@app.command()
def search(terms: list[str] = typer.Argument(..., help="Book search terms")):
    """Search for books and select one to view details."""
    search_string = " ".join(terms)
    books = search_books(search_string)

    if not books:
        typer.echo("No books found.")
        raise typer.Exit()

    typer.echo("Books found:")
    for idx, (book_id, title) in enumerate(books, start=1):
        typer.echo(f"{idx}. {title}")

    selection = typer.prompt(
        "Enter the number of the book you want details for", type=int
    )
    if selection < 1 or selection > len(books):
        typer.echo("Invalid selection.")
        raise typer.Exit()

    selected_book_id = books[selection - 1][0]
    typer.echo("Fetching details...")

    details = get_book_details(selected_book_id)

    typer.echo("\nBook Details:")
    typer.echo(f"Title: {details.get('title', 'N/A')}")
    typer.echo(f"Subtitle: {details.get('subtitle', 'N/A')}")
    typer.echo(f"Authors: {', '.join(details.get('authors', []))}")
    typer.echo(f"Publisher: {details.get('publisher', 'N/A')}")
    typer.echo(f"Published Date: {details.get('publishedDate', 'N/A')}")
    description = details.get("description", "N/A")
    typer.echo(f"Description: {clean_and_shorten_description(description)}")


if __name__ == "__main__":
    app()

Explanation:

  • The script starts with # /// script and ends with # /// to define the inline metadata, listing the required dependencies. You can also specify the Python version, e.g. # requires-python = ">=3.12" (see the Astral docs).
  • We decorate the search function with Typer's @app.command() to turn it into a CLI command. terms: list[str] = typer.Argument(…) lets me pass in multiple search terms, which are joined together (e.g. '3 body problem'). This function also handles letting the user select a book and showing its details.
  • The search_books() function calls the Google Books API to search for books. I am not sure about rate limits, but I did not have to generate an API key.
  • The get_book_details() function retrieves details for a specific book.
  • The clean_and_shorten_description() helper function removes HTML tags (using BeautifulSoup) from the description and truncates it. I learned that you can conveniently do this using textwrap.shorten() 🎉 (see the small demo after this list).
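
Here is a quick standalone illustration of that helper (a hypothetical snippet, not part of the script; the HTML string is made up):

import textwrap

from bs4 import BeautifulSoup

html = "<p>An <b>epic</b> science fiction trilogy about first contact.</p>"
plain = BeautifulSoup(html, "html.parser").get_text()
print(plain)  # An epic science fiction trilogy about first contact.
print(textwrap.shorten(plain, width=30, placeholder="..."))  # An epic science fiction...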

How to run it

Assuming you save this script as book.py, you can run it using uv like this: uv run book.py.

That's it: uv takes care of installing the dependencies in an isolated environment and running the script, without the need for a project directory or pyproject.toml file. 🚀
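
For example, to search for the terms mentioned above you could run something like this (an illustrative invocation; since the Typer app defines only a single command, you should not need to type the command name):

$ uv run book.py 3 body problem

You then get a numbered list of matching titles and a prompt asking which one to show details for.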

So there you have it: a practical example of how you can leverage uv + inline script metadata to create project-less utilities. 🌟

Two more examples

I checked my scripts folder and I have used this approach a couple of times lately:

1. Summarize YouTube videos using youtube-transcript-api + Marvin AI -> yt_summary.py:

# /// script
# dependencies = [
#   "marvin",
#   "youtube-transcript-api",
# ]
# ///
import os
import sys

from youtube_transcript_api import YouTubeTranscriptApi
import marvin

if "MARVIN_OPENAI_API_KEY" not in os.environ:
    print("Set MARVIN_OPENAI_API_KEY in env first")
    sys.exit(1)


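# marvin.fn uses an LLM to generate the return value based on the function's signature and docstring; no body is needed.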
@marvin.fn
def make_summary(transcript: list[dict]) -> str:
    """
    Craft a concise description for YouTube of this YouTube video transcript.
    """


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Please provide a YouTube id")
        sys.exit(1)

    video_id = sys.argv[1]
    transcript = YouTubeTranscriptApi.get_transcript(video_id)
    print(make_summary(transcript))

2. Scrape an article using Newspaper3k and return its text -> article.py:

# /// script
# dependencies = [
#   "newspaper3k",
#   "lxml_html_clean",
# ]
# ///
import sys
from newspaper import Article

url = sys.argv[1]
article = Article(url)
article.download()
article.parse()
print(article.text)

Again, uv run <script> is all you need, for example:

$ uv run yt_summary.py p4zy9UZYa0o
Reading inline script metadata from `yt_summary.py`
The podcast episode highlights a partnership with Believe Resourcing Group, ...

Super cool + convenient 😎 📈


Try creating a utility script using inline script metadata + uv and share your experience in our community …
