Week 12 Lecture: File I/O
Until now, all programs lost their data when they finished running. File I/O (Input/Output) changes this by allowing programs to:
- Save data permanently — Data persists between runs
- Process large datasets — Handle data impractical to type manually
- Share data — Exchange information with other programs
- Create outputs — Generate logs, reports, and exports
Reading from Text Files
What is File I/O?
File I/O refers to reading data from files (Input) and writing data to files (Output). Files are containers of data stored on disk that persist when programs aren’t running.
Example: A gradebook program uses file I/O to save grades persistently, avoiding re-entry on each run. This builds upon previous concepts: loops process files line by line, string methods (.strip(), .split()) parse data, lists/dictionaries store data in memory, and exception handling manages file errors.
Fundamental Concepts
Four essential principles:
- Files must be opened before use and closed when done
- Files are read as strings—type conversion is required for numbers
- Lines end with newline characters (\n)
- File modes control allowed operations
Syntax for Reading Files
The basic pattern for reading a file involves three steps: opening, reading, and closing.
file = open("filename.txt", "r") # Open file in read mode
content = file.read() # Read entire file as one string
file.close() # Always close the file when done
File Modes for Reading
| Mode | Description |
|---|---|
"r" |
Read mode (default). File must exist. |
"r+" |
Read and write. File must exist. |
Methods for Reading File Content
Python provides three different methods for reading file content, each suited to different situations:
file.read() # Returns entire file as a single string
file.readline() # Returns the next line (including \n)
file.readlines() # Returns a list of all lines (each includes \n)
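A short sketch contrasting the three methods on a small sample file (demo.txt is a throwaway name chosen for this illustration):

```python
# Create a small demo file first (demo.txt is just an illustrative name)
out = open("demo.txt", "w")
out.write("alpha\nbeta\ngamma\n")
out.close()

f = open("demo.txt", "r")
whole = f.read()        # "alpha\nbeta\ngamma\n" -- the entire file as one string
f.close()

f = open("demo.txt", "r")
first = f.readline()    # "alpha\n" -- one line per call, newline included
second = f.readline()   # "beta\n"
f.close()

f = open("demo.txt", "r")
lines = f.readlines()   # ["alpha\n", "beta\n", "gamma\n"] -- list of all lines
f.close()
```

Notice that every result still carries its \n characters; nothing strips them for you.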
Iterating Through a File
File objects in Python are iterable, meaning they can be used directly in a for loop. This approach is memory-efficient because it doesn’t load the entire file at once:
file = open("filename.txt", "r")
for line in file: # File objects are iterable
print(line) # Each iteration gives you one line
file.close()
Navigating Within a File
Sometimes you need to move the “cursor” (read/write position) within a file. Python provides two methods for this:
- file.tell(): Returns the current cursor position (in bytes/characters).
- file.seek(offset): Moves the cursor to a specific byte position.
file = open("data.txt", "r")
print(file.read(5)) # Reads first 5 characters
print(file.tell()) # Prints 5 (current position)
file.seek(0) # Moves cursor back to the beginning
print(file.read(5)) # Reads first 5 characters again
file.close()
Common Mistakes to Avoid
- Forgetting to close the file — This causes resource leaks and potential data corruption
- Reading a non-existent file — This raises a FileNotFoundError
- Forgetting that read() returns strings — Even numeric data comes in as text
- Not handling newline characters — Each line includes \n at the end
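Since opening a missing file raises FileNotFoundError, a common defensive pattern wraps the open call in try/except. A minimal sketch (missing_data.txt is a hypothetical filename that does not exist):

```python
filename = "missing_data.txt"   # hypothetical name; no such file exists
try:
    f = open(filename, "r")
    content = f.read()
    f.close()
except FileNotFoundError:
    content = ""                # fall back to an empty result instead of crashing
    print(f"Could not find {filename}")
```

The program keeps running with a sensible default instead of stopping with a traceback.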
Example: Analyzing Temperature Data
Consider a file called temperatures.txt containing daily temperature readings from a weather station. Each line contains a single temperature value (in Celsius) as a decimal number:
23.5
18.2
25.1
22.8
19.5
27.3
21.0
The following program reads all temperatures from the file, calculates the average temperature, and finds the highest and lowest values:
def analyze_temperatures(filename):
    file = open(filename, "r")
    temperatures = []
    for line in file:
        temp = float(line.strip())  # .strip() removes the \n character
        temperatures.append(temp)
    file.close()
    average = sum(temperatures) / len(temperatures)
    highest = max(temperatures)
    lowest = min(temperatures)
    return average, highest, lowest

avg, high, low = analyze_temperatures("temperatures.txt")
print(f"Average: {avg:.1f}°C")
print(f"Highest: {high}°C")
print(f"Lowest: {low}°C")
Output:
Average: 22.5°C
Highest: 27.3°C
Lowest: 18.2°C
Key techniques: .strip() removes newlines before type conversion (“23.5\n” → “23.5” → 23.5). The for line in file pattern is memory-efficient, loading one line at a time. The pattern float(line.strip()) is standard: clean first, then convert. Always close files to prevent corruption or access issues.
The with Statement for Automatic File Handling
Understanding Context Managers
The with statement automatically closes files when the block ends, even if errors occur. This “context manager” pattern prevents resource leaks and data corruption, ensuring cleanup without explicit try/finally blocks.
Four key characteristics:
- Files automatically close when the with block ends
- Closure happens even if exceptions occur
- This is the preferred pattern in Python
- Multiple files can be opened in one statement
Syntax for the with Statement
The basic pattern replaces the manual open-process-close sequence with a cleaner structure:
with open("filename.txt", "r") as file:
content = file.read()
# Work with content here
# File is automatically closed here, outside the with block
The as keyword assigns the opened file object to a variable name. Any valid variable name can be used (e.g., as f, as data_file, as input_stream).
When working with multiple files simultaneously, they can be opened in a single with statement:
with open("input.txt", "r") as infile, open("output.txt", "w") as outfile:
data = infile.read()
outfile.write(data)
There are three syntax rules to keep in mind when using with:
- Use the file variable only inside the with block — after the block ends the file is closed, so reads and writes on it fail (the name itself still exists)
- All file operations must be indented inside the block
- Never call file.close() explicitly — closure happens automatically
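One way to see the automatic closing in action is the file object's closed attribute. The variable still exists after the block, but the underlying file is closed:

```python
with open("demo_with.txt", "w") as f:
    f.write("hello\n")
    print(f.closed)   # False: the file is open inside the block

print(f.closed)       # True: closed automatically once the block ends
```

Calling f.read() or f.write() after the block would raise a ValueError ("I/O operation on closed file").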
Example: Analyzing Student Scores
Consider a file called scores.txt containing student exam scores. Each line has a student’s name followed by their score, separated by a comma:
Alice,85
Bob,72
Charlie,91
Diana,68
Eve,88
Frank,75
Grace,95
The following program reads the file using the with statement, creates a dictionary mapping each student name to their score, and identifies all students who scored above the class average:
def find_above_average_students(filename):
    scores = {}
    with open(filename, "r") as file:
        for line in file:
            parts = line.strip().split(",")
            name = parts[0]
            score = int(parts[1])
            scores[name] = score
    # File is now closed - we can still work with our dictionary
    average = sum(scores.values()) / len(scores)
    above_average = []
    for name, score in scores.items():
        if score > average:
            above_average.append(name)
    return above_average, average

students, avg = find_above_average_students("scores.txt")
print(f"Class average: {avg:.1f}")
print(f"Above average students: {students}")
Output:
Class average: 82.0
Above average students: ['Alice', 'Charlie', 'Eve', 'Grace']
Key concepts: The file is only open inside the with block, but data extracted into ordinary variables (like the scores dictionary) persists afterward, allowing prompt file closure while enabling further processing.
The pattern line.strip().split(",") is standard for CSV data: remove newlines, then split on delimiters. The code separates reading from processing—read all data, close file, then analyze. No explicit close() call is needed, eliminating the risk of forgetting or exceptions preventing closure.
Writing to Text Files
Understanding File Writing
Writing saves data permanently for later access by programs or text editors. Uses include preferences, reports, exports, and persistent records. This leverages f-strings for formatting, loops for multiple items, and string methods for preparation.
Four essential principles:
- Write mode ("w") overwrites — deletes existing content
- Append mode ("a") adds to end — preserves existing content
- Only strings can be written — use str() or f-strings for conversion
- Newlines are manual — must include \n explicitly
Syntax for Writing Files
File Modes for Writing
| Mode | Description |
|---|---|
"w" |
Write mode. Creates file if it doesn’t exist. Overwrites if it does! |
"a" |
Append mode. Creates file if it doesn’t exist. Adds to end if it does. |
"r+" |
Read and write. File must exist. |
Writing Methods
Python provides two methods for writing content to files:
file.write(string) # Write a string to the file (no automatic newline)
file.writelines(list) # Write a list of strings (no automatic newlines)
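A quick sketch of the "no automatic newlines" rule for both methods (joined.txt and separated.txt are illustrative names):

```python
lines = ["one", "two", "three"]

with open("joined.txt", "w") as f:
    f.writelines(lines)          # file contains "onetwothree" -- nothing added

with open("separated.txt", "w") as f:
    f.writelines([line + "\n" for line in lines])  # add the newlines yourself
```

The second file ends up with one item per line only because the code appended \n to each string before writing.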
Basic Write Pattern
The standard pattern for writing creates or overwrites a file:
with open("output.txt", "w") as file:
file.write("First line\n")
file.write("Second line\n")
Append Pattern
To add content to an existing file without destroying its current contents, use append mode:
with open("log.txt", "a") as file:
file.write("New entry added\n")
Common Mistakes to Avoid
- Using "w" when you meant to append — this destroys all existing data
- Forgetting \n at the end of lines — all content will run together on one line
- Writing non-strings directly — file.write(42) causes a TypeError; use file.write(str(42)) instead
- Assuming writelines() adds newlines — it does not add any characters between items
Example: Generating a Grade Report
Consider building a grade report generator. Given a dictionary of student names and scores, the program should create a formatted report file containing a header, each student’s information with their letter grade, and summary statistics.
The grading scale is:
- A: 90–100
- B: 80–89
- C: 70–79
- D: 60–69
- F: Below 60
def get_letter_grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    else:
        return "F"

def generate_grade_report(grades, filename):
    with open(filename, "w") as file:
        # Write header
        file.write("=" * 40 + "\n")
        file.write("          GRADE REPORT\n")
        file.write("=" * 40 + "\n\n")
        # Write each student's record
        file.write(f"{'Name':<15}{'Score':<10}{'Grade'}\n")
        file.write("-" * 30 + "\n")
        for name, score in grades.items():
            letter = get_letter_grade(score)
            file.write(f"{name:<15}{score:<10}{letter}\n")
        # Write summary
        average = sum(grades.values()) / len(grades)
        file.write("\n" + "-" * 30 + "\n")
        file.write(f"Class Average: {average:.1f}\n")
        file.write("=" * 40 + "\n")

grades = {
    "Alice": 92,
    "Bob": 78,
    "Charlie": 85,
    "Diana": 67,
    "Eve": 91
}

generate_grade_report(grades, "grade_report.txt")
print("Report generated successfully!")

# Verify by reading the file back
with open("grade_report.txt", "r") as file:
    print(file.read())
Output:
Report generated successfully!
========================================
          GRADE REPORT
========================================

Name           Score     Grade
------------------------------
Alice          92        A
Bob            78        C
Charlie        85        B
Diana          67        D
Eve            91        A

------------------------------
Class Average: 82.6
========================================
Key techniques: Unlike print(), write() doesn’t add newlines automatically—every \n must be explicit. F-strings format output: f"{name:<15}" creates a left-aligned 15-character field for column alignment. String multiplication ("=" * 40) creates visual separators efficiently.
Using write mode ("w"), each run replaces previous content—no duplicates accumulate. For accumulation, use append mode ("a").
Append Mode and Building Persistent Logs
Understanding Append Mode
Append mode ("a") positions the write cursor at file end, preserving existing content. Used for logs, transaction history, high scores, and journals where data accumulates over runs.
Four characteristics:
- Creates file if missing — safe for new files
- Never modifies existing content — only adds at end
- Accumulates across runs — data grows over time
- No automatic newlines — \n still required
Syntax for Append Mode
The basic append pattern looks similar to write mode, but uses "a" instead of "w":
with open("log.txt", "a") as file:
file.write("New entry\n")
Comparison of Write and Append Modes
The difference between these modes is significant and worth understanding clearly:
# Write mode - replaces everything
with open("data.txt", "w") as f:
    f.write("This replaces all content\n")

# Append mode - adds to end
with open("data.txt", "a") as f:
    f.write("This is added to the end\n")
If data.txt contained previous content, write mode would erase it entirely before writing the new text. Append mode would preserve all existing content and add the new text after it.
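The difference can be verified directly with a throwaway file (mode_demo.txt is an illustrative name):

```python
with open("mode_demo.txt", "w") as f:
    f.write("first\n")

with open("mode_demo.txt", "a") as f:
    f.write("second\n")   # appended: file now holds "first\nsecond\n"

with open("mode_demo.txt", "w") as f:
    f.write("third\n")    # "w" wiped the file before writing

with open("mode_demo.txt", "r") as f:
    print(f.read())       # prints only "third"
```

After the final write in "w" mode, "first" and "second" are gone entirely.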
Building Log Entries
Log entries typically include contextual information formatted into a consistent structure:
entry_number = 1
event = "User logged in"
with open("log.txt", "a") as file:
file.write(f"Entry {entry_number}: {event}\n")
Example: High Score Tracker
Consider building a high score tracker for a game. The system needs two functions: one to add new scores to a persistent file, and another to find the highest score recorded.
The add_score function appends a new score entry to the file in the format name,score. The get_high_score function reads the file and returns the name and score of the player with the highest score, or None if the file doesn’t exist or is empty.
def add_score(filename, player_name, score):
    with open(filename, "a") as file:
        file.write(f"{player_name},{score}\n")

def get_high_score(filename):
    best_name = None
    best_score = 0
    try:
        with open(filename, "r") as file:
            for line in file:
                if line.strip():
                    parts = line.strip().split(",")
                    name = parts[0]
                    score = int(parts[1])
                    if score > best_score:
                        best_score = score
                        best_name = name
    except FileNotFoundError:
        return None
    return (best_name, best_score) if best_name else None

# Demo
scores_file = "highscores.txt"

# Start fresh for demonstration
with open(scores_file, "w") as f:
    pass

# Add scores
add_score(scores_file, "Alice", 150)
add_score(scores_file, "Bob", 230)
add_score(scores_file, "Charlie", 180)
add_score(scores_file, "Alice", 275)

# Check high score
result = get_high_score(scores_file)
print(f"High score: {result[0]} with {result[1]} points")

# Show file contents
print("\nAll scores:")
with open(scores_file, "r") as f:
    print(f.read())
Output:
High score: Alice with 275 points
All scores:
Alice,150
Bob,230
Charlie,180
Alice,275
Key aspects: Append mode auto-creates missing files—no existence checks needed. Data persists between runs; without “start fresh,” scores accumulate across executions.
The get_high_score function uses try/except for defensive programming, returning None instead of crashing if the file is missing.
Contrast: Write mode ("w") creates empty files (erases content), while append mode ("a") preserves existing content. The check if line.strip() skips blank lines that would cause parsing errors.
Processing Files Line-by-Line and File Paths
Line-by-Line Processing
Large files may exceed available memory. Python’s iterable file objects allow line-by-line loops, keeping only one line in memory at a time.
Understanding File Paths
Simple filenames like "data.txt" assume files are in the script’s directory. Real projects organize files into directory structures requiring explicit paths.
Four key principles:
- Iterating is memory-efficient — for line in file loads one line at a time
- Relative paths — "data/input.txt" references subfolders from the current location
- Absolute paths — "C:/Users/name/Documents/file.txt" specifies the complete location
- Forward slashes work everywhere — use / on all platforms
Syntax for Line-by-Line Processing
The basic pattern for processing files one line at a time uses a for loop directly on the file object:
with open("large_file.txt", "r") as file:
for line in file:
# Process one line at a time
processed = line.strip()
Syntax for File Paths
Different path patterns serve different purposes:
# Same directory as script
with open("data.txt", "r") as f:
    pass

# Subdirectory
with open("data/input.txt", "r") as f:
    pass

# Parent directory
with open("../data.txt", "r") as f:
    pass

# Absolute path (Windows)
with open("C:/Users/student/Documents/data.txt", "r") as f:
    pass

# Absolute path (Mac/Linux)
with open("/home/student/documents/data.txt", "r") as f:
    pass
Relative paths (like "data/input.txt" or "../data.txt") are interpreted relative to the current working directory—typically the directory from which the Python script is run. The .. notation means “go up one directory level.”
Absolute paths specify the complete location starting from the root of the file system. On Windows, paths begin with a drive letter like C:/. On Mac and Linux, paths begin with a forward slash /.
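The standard library's pathlib module builds and manipulates paths portably, joining parts with the correct separator for the platform. A minimal sketch (the data directory and input.txt are illustrative names):

```python
from pathlib import Path

data_dir = Path("data")             # a relative directory, name chosen for the sketch
data_dir.mkdir(exist_ok=True)       # create the folder if it is missing

file_path = data_dir / "input.txt"  # "/" joins path components portably
file_path.write_text("hello\n")     # Path objects can write text directly

with open(file_path, "r") as f:     # open() accepts Path objects too
    print(f.read())
```

Using Path instead of hand-built strings avoids platform-specific separator bugs entirely.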
Counting Lines Efficiently
The line-by-line approach allows processing of arbitrarily large files without memory concerns:
line_count = 0
with open("huge_file.txt", "r") as file:
    for line in file:
        line_count += 1
print(f"Total lines: {line_count}")
This code could count the lines in a file containing billions of entries, because only one line exists in memory at any time.
Example: Counting Task Statuses
Consider a file tasks.txt where each line contains a task description and its status, separated by a comma:
Buy groceries,done
Finish homework,pending
Call mom,done
Pay bills,pending
Clean room,pending
The following function counts how many tasks exist for each status:
# Create sample file
sample = """Buy groceries,done
Finish homework,pending
Call mom,done
Pay bills,pending
Clean room,pending"""
with open("tasks.txt", "w") as f:
f.write(sample)
def count_tasks(filename):
counts = {}
with open(filename, "r") as file:
for line in file:
parts = line.strip().split(",")
status = parts[1]
if status in counts:
counts[status] += 1
else:
counts[status] = 1
return counts
result = count_tasks("tasks.txt")
print(f"Task summary: {result}")
print(f"Pending tasks: {result['pending']}")
Output:
Task summary: {'done': 2, 'pending': 3}
Pending tasks: 3
Key aspects: The function never loads all lines simultaneously—each is read, parsed, counted, then discarded. This scales from five to five million tasks identically.
The for line in file construct is standard for memory-efficient processing. The dictionary counting pattern (check if key exists, increment or initialize) applies naturally to file analysis.
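The if/else counting step can also be written with the dictionary get() method, which returns a default value when the key is missing:

```python
counts = {}
for status in ["done", "pending", "done", "pending", "pending"]:
    counts[status] = counts.get(status, 0) + 1   # 0 if the key hasn't been seen yet

print(counts)   # {'done': 2, 'pending': 3}
```

Both forms are equivalent; get() simply collapses the check-then-initialize branch into one line.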
Working with CSV Files
Understanding CSV Format
CSV (“Comma-Separated Values”) stores tabular data with rows as lines and values separated by commas. It’s common because spreadsheets export/import it, many APIs provide it, it’s human-readable, and it suits data analysis.
Four essential characteristics:
- First row is often a header — column names, not data
- Subsequent rows are records — one row per entity
- Commas in values cause problems — advanced handling uses quotes
- Basic CSV is text — standard string methods suffice
Syntax for CSV Operations
Basic CSV Reading Pattern
The standard approach reads the header separately from the data rows:
with open("data.csv", "r") as file:
header = file.readline().strip().split(",") # First line is headers
data = []
for line in file: # Remaining lines are data
values = line.strip().split(",")
data.append(values)
After reading the header with readline(), the file object’s position advances to the second line. The subsequent for loop therefore iterates only over the data rows.
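A quick runnable check that iteration really resumes at the second line after readline() consumes the header (header_demo.csv is a throwaway file created for the sketch):

```python
# Build a tiny CSV file to read back
with open("header_demo.csv", "w") as f:
    f.write("name,grade\nAlice,85\nBob,72\n")

with open("header_demo.csv", "r") as f:
    header = f.readline().strip().split(",")   # consumes line 1 only
    rows = [line.strip() for line in f]        # loop starts at line 2

print(header)  # ['name', 'grade']
print(rows)    # ['Alice,85', 'Bob,72']
```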
Creating Dictionaries from CSV Data
A more powerful pattern converts each row into a dictionary, using the header values as keys:
with open("data.csv", "r") as file:
lines = file.readlines()
headers = lines[0].strip().split(",")
records = []
for line in lines[1:]: # Skip header row
values = line.strip().split(",")
record = {}
for i in range(len(headers)):
record[headers[i]] = values[i]
records.append(record)
This approach produces a list of dictionaries where each dictionary represents one row. Accessing data by column name (like record["name"]) is often clearer than accessing by index (like values[0]).
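When a value itself contains a comma, manual split(",") breaks the row apart incorrectly. Python's standard csv module handles quoted fields for exactly this case; a minimal sketch (quoted.csv and its contents are invented for illustration):

```python
import csv

# Write a row whose second field contains a comma; csv quotes it automatically
with open("quoted.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "address"])
    writer.writerow(["Alice", "12 Main St, Springfield"])

# csv.reader respects the quotes, so the address stays one field
with open("quoted.csv", "r", newline="") as f:
    rows = list(csv.reader(f))

print(rows[1])   # ['Alice', '12 Main St, Springfield']
```

For simple, comma-free data the string-method approach in this lecture is fine; reach for the csv module as soon as fields may contain delimiters.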
Writing CSV Files
Creating CSV files follows the same principles as writing any text file, with commas separating values:
with open("output.csv", "w") as file:
file.write("name,age,city\n") # Header
file.write("Alice,25,New York\n")
file.write("Bob,30,Boston\n")
Example: Finding Passing Students
Consider a CSV file students.csv containing student names and grades:
name,grade
Alice,85
Bob,72
Charlie,91
Diana,68
The following function returns a list of names of students who scored at or above a specified passing grade:
# Create sample file
csv_data = """name,grade
Alice,85
Bob,72
Charlie,91
Diana,68"""
with open("students.csv", "w") as f:
f.write(csv_data)
def get_passing_students(filename, passing_grade):
passing = []
with open(filename, "r") as file:
file.readline() # Skip header row
for line in file:
parts = line.strip().split(",")
name = parts[0]
grade = int(parts[1])
if grade >= passing_grade:
passing.append(name)
return passing
result = get_passing_students("students.csv", 75)
print(f"Passing students (75+): {result}")
result = get_passing_students("students.csv", 85)
print(f"Passing students (85+): {result}")
Output:
Passing students (75+): ['Alice', 'Charlie']
Passing students (85+): ['Alice', 'Charlie']
Key techniques: file.readline() reads and discards the header, positioning the file for data-only processing. Without this, parsing “name” and “grade” as data would cause errors.
Type conversion is essential—all CSV values are strings. Convert with int(parts[1]) before numeric comparison. The threshold parameter makes the function reusable for different passing levels.
Consolidation Problem 1: The Sales Analytics Report
Context
Businesses generate logs of every transaction that occurs. A common task for a programmer is to take a raw log of individual sales and “aggregate” them—meaning, group them together to find totals. This problem combines file parsing, math, and dictionary accumulation.
Concepts Practiced
- Parsing CSV data: Splitting strings and converting types
- The Accumulator Pattern: Using a dictionary to sum values by category
- Reporting: Formatting strings to create a clean output file
Problem Statement
You are analyzing a daily sales log for a coffee shop. The system generates a raw text file called transactions.txt. Each line represents a single sale in the format:
item_name,quantity,price_per_unit
Write a program that processes this file to generate a summary report.
Requirements:
- Read the data from transactions.txt.
- Calculate the total revenue generated for each unique item (Revenue = Quantity × Price).
- Identify the Best Selling Item (the item that generated the highest total revenue).
- Write a new file called daily_report.txt that lists:
  - A header.
  - Each item name and its total revenue.
  - A footer section highlighting the Best Selling Item and the Total Daily Revenue (sum of all items).
Input Data (transactions.txt):
Cappuccino,2,4.50
Espresso,5,3.00
Latte,1,5.00
Cappuccino,1,4.50
Muffin,3,2.50
Latte,2,5.00
Espresso,1,3.00
Approach
- Create an empty dictionary to store item totals.
- Loop through the file line by line.
- For each line, calculate the specific sale amount and add it to the correct key in the dictionary.
- After the loop, find the maximum value in the dictionary.
- Open a new file in write mode and use f-strings to format the output nicely.
Solution
def generate_sales_report(input_file, output_file):
    # Dictionary to store total revenue per item
    # Structure: { "ItemName": total_money_float }
    item_revenue = {}

    # --- Step 1: Read and Process Data ---
    try:
        with open(input_file, "r") as file:
            for line in file:
                # Clean and parse the line
                parts = line.strip().split(",")
                name = parts[0]
                quantity = int(parts[1])
                price = float(parts[2])
                sale_total = quantity * price
                # Accumulate revenue in dictionary
                if name in item_revenue:
                    item_revenue[name] += sale_total
                else:
                    item_revenue[name] = sale_total
    except FileNotFoundError:
        print(f"Error: {input_file} not found.")
        return

    # --- Step 2: Analyze Data ---
    # Calculate grand total
    total_daily_revenue = sum(item_revenue.values())

    # Find the item with the highest revenue
    # We set initial values to handle the search
    best_item = ""
    max_rev = -1
    for item, rev in item_revenue.items():
        if rev > max_rev:
            max_rev = rev
            best_item = item

    # --- Step 3: Write Report ---
    with open(output_file, "w") as file:
        file.write("DAILY SALES SUMMARY\n")
        file.write("-------------------\n")
        # Write individual item totals
        for item, revenue in item_revenue.items():
            file.write(f"{item}: ${revenue:.2f}\n")
        # Write footer statistics
        file.write("\n-------------------\n")
        file.write(f"Total Revenue: ${total_daily_revenue:.2f}\n")
        file.write(f"Best Seller: {best_item} (${max_rev:.2f})\n")

# --- Setup and Execution ---
# Create dummy data for testing
sample_data = """Cappuccino,2,4.50
Espresso,5,3.00
Latte,1,5.00
Cappuccino,1,4.50
Muffin,3,2.50
Latte,2,5.00
Espresso,1,3.00"""

with open("transactions.txt", "w") as f:
    f.write(sample_data)

# Run the program
generate_sales_report("transactions.txt", "daily_report.txt")

# Verify by reading the output to console
print("Report generated. Contents:")
with open("daily_report.txt", "r") as f:
    print(f.read())
Expected Output (daily_report.txt):
DAILY SALES SUMMARY
-------------------
Cappuccino: $13.50
Espresso: $18.00
Latte: $15.00
Muffin: $7.50
-------------------
Total Revenue: $54.00
Best Seller: Espresso ($18.00)
Consolidation Problem 2: The Data Merger & Error Logger
Context
A very common task in data science and systems programming is “ETL” (Extract, Transform, Load). You often have data in one file (like scores) that references data in another file (like student names). Furthermore, data is rarely perfect—it often contains typos or missing IDs.
This problem simulates a gradebook system where you must merge two files while simultaneously filtering out “bad” data.
Concepts Practiced
- Multi-file Handling: Using with to manage one input and multiple output files
- Data Validation: Checking whether a key exists in a dictionary
- Exception Handling: Using try/except to catch data type errors
- Error Logging: Separating valid data from invalid data
Problem Statement
You are building a grade management system. You have two files:
- students.txt: A master list of Student IDs and Names (ID,Name).
- raw_scores.txt: A list of exam results (ID,Score).
However, the raw_scores.txt file is “dirty”—it contains IDs that don’t exist in the master list, and some scores are corrupted (words instead of numbers).
Write a program that processes these files to:
- Create a Final Gradebook (final_grades.txt) with valid Name: Score entries.
- Create an Error Log (error_log.txt) listing every line that could not be processed and why.
Input Data:
students.txt (Master List):
101,Alice
102,Bob
103,Charlie
raw_scores.txt (Data to Process):
101,88
102,ninety
999,75
103,92
101,95
Approach
- Load the lookup table: Read students.txt first and store it in a dictionary (id -> name). This allows for O(1) lookups.
- Open three files at once: Open the raw scores (read), the final grades (write), and the error log (write).
- Validate: For every score line, check two things:
  - Does the ID exist in our dictionary?
  - Is the score a valid integer?
- Route the data: If valid, write to the gradebook. If invalid, write to the error log.
Solution
def process_grades(student_file, score_file):
    # --- Step 1: Load Master Student List ---
    student_db = {}
    try:
        with open(student_file, "r") as f:
            for line in f:
                parts = line.strip().split(",")
                # Map ID (key) to Name (value)
                student_db[parts[0]] = parts[1]
    except FileNotFoundError:
        print("Master student file missing!")
        return

    # --- Step 2: Process Scores ---
    # We open raw_scores to read, and TWO other files to write
    with open(score_file, "r") as infile, \
         open("final_grades.txt", "w") as valid_file, \
         open("error_log.txt", "w") as error_file:

        # Write header for the valid file
        valid_file.write(f"{'Name':<15} Score\n")
        valid_file.write("-" * 25 + "\n")

        for line in infile:
            clean_line = line.strip()
            if not clean_line:
                continue  # Skip empty lines

            parts = clean_line.split(",")
            student_id = parts[0]
            raw_score = parts[1]

            # --- Validation Logic ---
            # Check 1: Does the ID exist in our master list?
            if student_id not in student_db:
                error_file.write(f"Unknown Student ID: {clean_line}\n")
                continue  # Skip to next iteration of loop

            # Check 2: Is the score actually a number?
            try:
                score = int(raw_score)
            except ValueError:
                error_file.write(f"Invalid Score Format: {clean_line}\n")
                continue  # Skip to next iteration

            # --- Success Path ---
            # If we get here, data is valid. Retrieve name and write.
            student_name = student_db[student_id]
            valid_file.write(f"{student_name:<15} {score}\n")

# --- Setup and Execution ---
# Create the files needed for the problem
with open("students.txt", "w") as f:
    f.write("101,Alice\n102,Bob\n103,Charlie")

with open("raw_scores.txt", "w") as f:
    f.write("101,88\n102,ninety\n999,75\n103,92\n101,95")

# Run Program
process_grades("students.txt", "raw_scores.txt")

# Verify Results
print("--- Content of final_grades.txt ---")
with open("final_grades.txt", "r") as f:
    print(f.read())

print("\n--- Content of error_log.txt ---")
with open("error_log.txt", "r") as f:
    print(f.read())
Expected Output (final_grades.txt):
Name            Score
-------------------------
Alice           88
Charlie         92
Alice           95
Expected Output (error_log.txt):
Invalid Score Format: 102,ninety
Unknown Student ID: 999,75