How to Scan SQL Rows With Unknown Columns in Go (SELECT *)
To scan SQL rows with unknown columns in Go, use rows.Columns() to discover column names at runtime, then build a []interface{} slice of *interface{} pointers and pass it to rows.Scan with the ... variadic expansion. After each scan, zip the column names and values into a map[string]interface{}. This is the standard pattern for handling SELECT * or any query whose shape you don't know at compile time.
Here is a complete, reusable function that returns each row as a map keyed by column name.
func scanRowsToMaps(rows *sql.Rows) ([]map[string]interface{}, error) {
	columns, err := rows.Columns()
	if err != nil {
		return nil, err
	}
	var results []map[string]interface{}
	for rows.Next() {
		// Allocate a fresh slice of interface{} values per row.
		values := make([]interface{}, len(columns))
		pointers := make([]interface{}, len(columns))
		for i := range values {
			pointers[i] = &values[i]
		}
		if err := rows.Scan(pointers...); err != nil {
			return nil, err
		}
		row := make(map[string]interface{}, len(columns))
		for i, name := range columns {
			row[name] = values[i]
		}
		results = append(results, row)
	}
	return results, rows.Err()
}
Why the Double Slice of Pointers
The Scan method expects one pointer destination per column. When you don't know the types, you scan into interface{} holes and let the driver decide which concrete Go type to put there. The values slice holds the actual data, and pointers holds &values[i] for each index so Scan has somewhere to write. Allocating both slices fresh per row keeps each row independent: the map insert copies the interface values, so scalars survive reuse, but if you ever held on to the values slice itself, the next Scan would overwrite data you still reference. Fresh allocation is cheap and avoids the whole class of bug.
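The mechanics are easier to see without a database. In this sketch, the hypothetical fill function stands in for rows.Scan and writes one value through each *interface{} destination, the same way a driver would:

```go
package main

import "fmt"

// fill stands in for rows.Scan: it writes one value through each
// *interface{} destination, just as a driver would.
// (Hypothetical helper, for illustration only.)
func fill(dests ...interface{}) {
	data := []interface{}{int64(7), "alice", nil}
	for i, d := range dests {
		*(d.(*interface{})) = data[i]
	}
}

// scanOne runs the double-slice pattern against fill and returns the row map.
func scanOne() map[string]interface{} {
	columns := []string{"id", "name", "nickname"}
	values := make([]interface{}, len(columns))
	pointers := make([]interface{}, len(columns))
	for i := range values {
		pointers[i] = &values[i] // fill (like Scan) writes through these
	}
	fill(pointers...)

	row := make(map[string]interface{}, len(columns))
	for i, name := range columns {
		row[name] = values[i]
	}
	return row
}

func main() {
	fmt.Println(scanOne()) // map[id:7 name:alice nickname:<nil>]
}
```

The only moving part is the pointer indirection; everything else is plain slice and map bookkeeping.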
Gotcha: Driver Types Are Not Always What You Expect
The most common surprise is that MySQL drivers (notably go-sql-driver/mysql) return VARCHAR, TEXT, and DATETIME columns as []byte rather than string (DATETIME comes back as time.Time only if you set parseTime=true in the DSN). If you JSON-marshal the result map directly, you get base64-encoded blobs instead of readable strings. Convert them explicitly:
for i, name := range columns {
	val := values[i]
	// Coerce []byte to string so JSON output is readable.
	if b, ok := val.([]byte); ok {
		row[name] = string(b)
	} else {
		row[name] = val
	}
}
Postgres (via lib/pq or pgx's stdlib adapter) behaves better here and returns native Go types for most columns, but you should still handle []byte defensively if your code might run against multiple databases. SQLite with mattn/go-sqlite3 also returns []byte for TEXT columns in some configurations.
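The base64 effect is reproducible with nothing but encoding/json, no database required; the encode helper here is a stand-in for whatever marshals your row maps:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encode marshals a row map the way a JSON API handler would.
// (Hypothetical helper for demonstration.)
func encode(row map[string]interface{}) string {
	b, _ := json.Marshal(row)
	return string(b)
}

func main() {
	// What a MySQL driver hands back for a VARCHAR column:
	fmt.Println(encode(map[string]interface{}{"name": []byte("alice")}))
	// prints {"name":"YWxpY2U="} — []byte marshals as base64

	// After the []byte -> string coercion shown above:
	fmt.Println(encode(map[string]interface{}{"name": "alice"}))
	// prints {"name":"alice"}
}
```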
Handling NULL Values
When a column is SQL NULL, the corresponding interface{} entry is the Go value nil. That's usually what you want for JSON output, since nil marshals to null. If you need richer typing, scan into sql.NullString and friends (or sql.RawBytes when you want to avoid copies), but for generic row-to-map work the plain interface{} path handles NULLs correctly without extra code.
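Both halves of that claim can be checked with the stdlib alone; nullRow is a hypothetical helper showing a NULL column arriving as untyped nil:

```go
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
)

// nullRow simulates a row map where one column was SQL NULL.
// (Hypothetical helper for demonstration.)
func nullRow() string {
	row := map[string]interface{}{"middle_name": nil}
	b, _ := json.Marshal(row)
	return string(b)
}

func main() {
	fmt.Println(nullRow()) // prints {"middle_name":null}

	// The type-aware alternative when you do know the column:
	var ns sql.NullString // zero value represents SQL NULL
	fmt.Println(ns.Valid) // prints false
}
```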
Getting Column Types Too
If you need the SQL type (for example, to render a schema or choose how to format output), use rows.ColumnTypes() instead of Columns(). Each *ColumnType exposes DatabaseTypeName(), nullability (when the driver supports it), and the Go scan type.
types, err := rows.ColumnTypes()
if err != nil {
	return nil, err
}
for _, t := range types {
	fmt.Printf("%s: db=%s go=%s\n",
		t.Name(), t.DatabaseTypeName(), t.ScanType())
}
When to Reach for a Library Instead
Rolling this yourself is fine for a one-off admin tool or a generic query endpoint. If you're doing it across a codebase, jmoiron/sqlx gives you rows.MapScan(dest), which does exactly this in one call, and rows.SliceScan() if you want positional values. For heavier use, pgx (Postgres-only) has pgx.CollectRows with pgx.RowToMap, which handles types natively without the []byte dance.
// With jmoiron/sqlx.
rows, err := db.Queryx("SELECT * FROM users WHERE id = ?", id)
if err != nil {
	return nil, err
}
defer rows.Close()
for rows.Next() {
	row := make(map[string]interface{})
	if err := rows.MapScan(row); err != nil {
		return nil, err
	}
	fmt.Println(row)
}
One Last Pitfall
Always call defer rows.Close() and check rows.Err() after the loop. rows.Next() returns false both on normal end-of-iteration and on error, so without the final Err() check you silently swallow connection failures mid-stream. The pattern above does this correctly. Copy it rather than retyping the loop structure each time.