Simple and performant client for PostgreSQL, MySQL, and SQLite

Status: API freeze (stable release). Note that all sub-packages (mainly extra/* packages) are not part of the API freeze and are developed independently. You can think of them as third-party packages that share one repo with the core.

Main features are:

Resources:

Projects using Bun:

github.com/frederikhors/orm-benchmark results
  4000 times - Insert
  raw_stmt:     0.38s        94280 ns/op     718 B/op     14 allocs/op
       raw:     0.39s        96719 ns/op     718 B/op     13 allocs/op
 beego_orm:     0.48s       118994 ns/op    2411 B/op     56 allocs/op
       bun:     0.57s       142285 ns/op     918 B/op     12 allocs/op
        pg:     0.58s       145496 ns/op    1235 B/op     12 allocs/op
      gorm:     0.70s       175294 ns/op    6665 B/op     88 allocs/op
      xorm:     0.76s       189533 ns/op    3032 B/op     94 allocs/op

  4000 times - MultiInsert 100 row
       raw:     4.59s      1147385 ns/op  135155 B/op    916 allocs/op
  raw_stmt:     4.59s      1148137 ns/op  131076 B/op    916 allocs/op
 beego_orm:     5.50s      1375637 ns/op  179962 B/op   2747 allocs/op
       bun:     6.18s      1544648 ns/op    4265 B/op    214 allocs/op
        pg:     7.01s      1753495 ns/op    5039 B/op    114 allocs/op
      gorm:     9.52s      2379219 ns/op  293956 B/op   3729 allocs/op
      xorm:    11.66s      2915478 ns/op  286140 B/op   7422 allocs/op

  4000 times - Update
  raw_stmt:     0.26s        65781 ns/op     773 B/op     14 allocs/op
       raw:     0.31s        77209 ns/op     757 B/op     13 allocs/op
 beego_orm:     0.43s       107064 ns/op    1802 B/op     47 allocs/op
       bun:     0.56s       139839 ns/op     589 B/op      4 allocs/op
        pg:     0.60s       149608 ns/op     896 B/op     11 allocs/op
      gorm:     0.74s       185970 ns/op    6604 B/op     81 allocs/op
      xorm:     0.81s       203240 ns/op    2994 B/op    119 allocs/op

  4000 times - Read
       raw:     0.33s        81671 ns/op    2081 B/op     49 allocs/op
  raw_stmt:     0.34s        85847 ns/op    2112 B/op     50 allocs/op
 beego_orm:     0.38s        94777 ns/op    2106 B/op     75 allocs/op
        pg:     0.42s       106148 ns/op    1526 B/op     22 allocs/op
       bun:     0.43s       106904 ns/op    1319 B/op     18 allocs/op
      gorm:     0.65s       162221 ns/op    5240 B/op    108 allocs/op
      xorm:     1.13s       281738 ns/op    8326 B/op    237 allocs/op

  4000 times - MultiRead limit 100
       raw:     1.52s       380351 ns/op   38356 B/op   1037 allocs/op
  raw_stmt:     1.54s       385541 ns/op   38388 B/op   1038 allocs/op
        pg:     1.86s       465468 ns/op   24045 B/op    631 allocs/op
       bun:     2.58s       645354 ns/op   30009 B/op   1122 allocs/op
 beego_orm:     2.93s       732028 ns/op   55280 B/op   3077 allocs/op
      gorm:     4.97s      1241831 ns/op   71628 B/op   3877 allocs/op
      xorm:     doesn't work

Why another database client?

So you can elegantly write complex queries:

regionalSales := db.NewSelect().
	ColumnExpr("region").
	ColumnExpr("SUM(amount) AS total_sales").
	TableExpr("orders").
	GroupExpr("region")

topRegions := db.NewSelect().
	ColumnExpr("region").
	TableExpr("regional_sales").
	Where("total_sales > (SELECT SUM(total_sales) / 10 FROM regional_sales)")

err := db.NewSelect().
	With("regional_sales", regionalSales).
	With("top_regions", topRegions).
	ColumnExpr("region").
	ColumnExpr("product").
	ColumnExpr("SUM(quantity) AS product_units").
	ColumnExpr("SUM(amount) AS product_sales").
	TableExpr("orders").
	Where("region IN (SELECT region FROM top_regions)").
	GroupExpr("region").
	GroupExpr("product").
	Scan(ctx)

The code above generates the following SQL:

WITH regional_sales AS (
    SELECT region, SUM(amount) AS total_sales
    FROM orders
    GROUP BY region
), top_regions AS (
    SELECT region
    FROM regional_sales
    WHERE total_sales > (SELECT SUM(total_sales)/10 FROM regional_sales)
)
SELECT region,
       product,
       SUM(quantity) AS product_units,
       SUM(amount) AS product_sales
FROM orders
WHERE region IN (SELECT region FROM top_regions)
GROUP BY region, product

Installation

go get github.com/uptrace/bun

You also need to install a database/sql driver and the corresponding Bun dialect.
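
For example, to use Bun with PostgreSQL you could install the pgdriver driver and the pgdialect dialect, both of which live in this repository (PostgreSQL is just one of the supported databases):

go get github.com/uptrace/bun/driver/pgdriver
go get github.com/uptrace/bun/dialect/pgdialect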

Quickstart

First you need to create a sql.DB. Here we are using the sqliteshim driver, which chooses between modernc.org/sqlite and mattn/go-sqlite3 depending on your platform.

import "github.com/uptrace/bun/driver/sqliteshim"

sqldb, err := sql.Open(sqliteshim.ShimName, "file::memory:?cache=shared")
if err != nil {
	panic(err)
}
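
If you target PostgreSQL instead, a similar setup uses Bun's pgdriver package. This is only a sketch and assumes a local test database reachable through the DSN shown:

import (
	"database/sql"

	"github.com/uptrace/bun/driver/pgdriver"
)

// Build a database/sql connector from a Postgres DSN.
dsn := "postgres://postgres:@localhost:5432/test?sslmode=disable"
sqldb := sql.OpenDB(pgdriver.NewConnector(pgdriver.WithDSN(dsn)))

With PostgreSQL you would then pass pgdialect.New() (from github.com/uptrace/bun/dialect/pgdialect) to bun.NewDB in the next step; the rest of this quickstart continues with SQLite.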

And then create a bun.DB on top of it using the corresponding SQLite dialect that comes with Bun:

import (
	"github.com/uptrace/bun"
	"github.com/uptrace/bun/dialect/sqlitedialect"
)

db := bun.NewDB(sqldb, sqlitedialect.New())
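
While developing, you can optionally log every executed query by registering a query hook from the extra/bundebug package (which, as noted above, is not covered by the core API freeze). A minimal sketch:

import "github.com/uptrace/bun/extra/bundebug"

// Print all queries, including successful ones, to the console.
db.AddQueryHook(bundebug.NewQueryHook(bundebug.WithVerbose(true)))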

Now you are ready to issue some queries:

type User struct {
	ID   int64
	Name string
}

user := new(User)
err := db.NewSelect().
	Model(user).
	Where("name != ?", "").
	OrderExpr("id ASC").
	Limit(1).
	Scan(ctx)
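
The select above assumes a users table already exists. If it does not, you can create one from the same model using Bun's query builder (the table name is derived from the User struct):

if _, err := db.NewCreateTable().
	Model((*User)(nil)).
	IfNotExists().
	Exec(ctx); err != nil {
	panic(err)
}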

Basic example

To provide initial data for our example, we will use Bun fixtures:

import "github.com/uptrace/bun/dbfixture"

// Register models for the fixture.
db.RegisterModel((*User)(nil), (*Story)(nil))

// WithRecreateTables tells Bun to drop existing tables and create new ones.
fixture := dbfixture.New(db, dbfixture.WithRecreateTables())

// Load fixture.yaml which contains data for User and Story models.
if err := fixture.Load(ctx, os.DirFS("."), "fixture.yaml"); err != nil {
	panic(err)
}
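
The User and Story models registered above could be defined along the following lines. This is only a sketch: the exact fields and struct tags are assumptions inferred from the fixture data and the Author relation used later in this section:

type User struct {
	ID     int64 `bun:",pk"`
	Name   string
	Emails []string
}

type Story struct {
	ID       int64 `bun:",pk"`
	Title    string
	AuthorID int64
	Author   *User `bun:"rel:belongs-to,join:author_id=id"`
}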

The fixture.yaml looks like this:

- model: User
  rows:
    - _id: admin
      name: admin
      emails: ['admin1@admin', 'admin2@admin']
    - _id: root
      name: root
      emails: ['root1@root', 'root2@root']

- model: Story
  rows:
    - title: Cool story
      author_id: '{{ $.User.admin.ID }}'

To select all users:

users := make([]User, 0)
if err := db.NewSelect().Model(&users).OrderExpr("id ASC").Scan(ctx); err != nil {
	panic(err)
}

To select a single user by id:

user1 := new(User)
if err := db.NewSelect().Model(user1).Where("id = ?", 1).Scan(ctx); err != nil {
	panic(err)
}

To select a story and the associated author in a single query:

story := new(Story)
if err := db.NewSelect().
	Model(story).
	Relation("Author").
	Limit(1).
	Scan(ctx); err != nil {
	panic(err)
}

To select a user into a map:

m := make(map[string]interface{})
if err := db.NewSelect().
	Model((*User)(nil)).
	Limit(1).
	Scan(ctx, &m); err != nil {
	panic(err)
}

To select all users, scanning each column into a separate slice:

var ids []int64
var names []string
if err := db.NewSelect().
	ColumnExpr("id, name").
	Model((*User)(nil)).
	OrderExpr("id ASC").
	Scan(ctx, &ids, &names); err != nil {
	panic(err)
}

For more details, please consult the documentation at https://bun.uptrace.dev/ and check the examples in the repository.

Contributors

Thanks to all the people who already contributed!