Compare commits

...

31 commits
v0.1.2 ... main

Author SHA1 Message Date
Gabe Farrell
0ec7b458cc
ui: tweaks and fixes (#194)
* reduce min width of top chart on mobile

* adjust error page style

* adjust h1 line height
2026-02-04 13:41:12 -05:00
Gabe Farrell
531c72899c
fix: add null check for top charts bg gradient (#193) 2026-02-03 11:23:30 -05:00
Gabe Farrell
b06685c1af
fix: rewind navigation (#191) 2026-02-02 15:06:13 -05:00
Gabe Farrell
64236c99c9
fix: invalid json response when login gate is disabled (#184) 2026-01-26 14:49:30 -05:00
Gabe Farrell
42b32c7920
feat: add api key auth to web api (#183) 2026-01-26 13:48:43 -05:00
PythonGermany
bf1c03e9fd
docs: fix typo in index.mdx (#182) 2026-01-26 13:43:01 -05:00
Gabe Farrell
35e104c97e
fix: gradient background on top charts (#181) 2026-01-26 13:03:27 -05:00
Gabe Farrell
c8a11ef018
fix: ensure mbids in mbidmapping are discovered (#180) 2026-01-25 15:51:07 -05:00
Gabe Farrell
937f9062b5
fix: include time zone name overrides and add KOITO_FORCE_TZ cfg option (#176)
* timezone overrides and force_tz option

* docs for force_tz

* add link to time zone names in docs
2026-01-24 13:19:04 -05:00
Gabe Farrell
1ed055d098
fix: ui tweaks and fixes (#170)
* add subtle gradient to home page

* tweak autumn theme primary color

* reduce home page top margin on mobile

* use focus-active instead of focus for outline

* fix gradient on rewind page

* align checkbox on login form

* i forgot what the pseudo class was called
2026-01-22 21:31:14 -05:00
Gabe Farrell
08fc9eed86
fix: correct interest bucket queries (#169) 2026-01-22 17:01:46 -05:00
Gabe Farrell
cb4d177875
fix: release associations and add cleanup migration (#168)
* fix: release associations and add cleanup migration

* fix: incorrect test
2026-01-22 15:33:38 -05:00
Gabe Farrell
16cee8cfca
fix: speedup top-artists and top-albums queries (#167) 2026-01-21 17:30:59 -05:00
onespaceman
c59c6c3baa
QOL changes to client (#165) 2026-01-21 16:03:27 -05:00
Gabe Farrell
e7ba34710c
feat: lastfm image support (#166)
* feat: lastfm image support

* docs
2026-01-21 16:03:05 -05:00
Gabe Farrell
56ac73d12b
fix: improve subsonic image searching (#164) 2026-01-21 14:54:52 -05:00
Gabe Farrell
1a8099e902
feat: refetch missing images on startup (#160)
* artist image refetching

* album image refetching

* remove unused var
2026-01-20 12:10:54 -05:00
Gabe Farrell
5e294b839c
feat: all time rank display (#149)
* add all time rank to item pages

* fix artist albums component

* add no rows check

* fix rewind page
2026-01-16 01:03:23 -05:00
d08e05220f docs: add disclaimer about subsonic config 2026-01-15 22:01:25 -05:00
c0de721a7c chore: ignore README for docker workflow 2026-01-15 21:27:59 -05:00
Gabe Farrell
d2d6924e05
fix: use sql rank (#148) 2026-01-15 21:08:30 -05:00
Gabe Farrell
aa7fddd518
fix: a couple ui fixes (#147)
* fix: reduce loading component width

* improve theme selector for mobile

* match interest graph width to activity grid
2026-01-15 20:21:05 -05:00
Gabe Farrell
1eb1cd0fd5
chore: call relay early to prevent missed relays (#145)
* chore: call relay early to prevent missed relays

* fix: get current time in tz for listen activity (#146)

* fix: get current time in tz for listen activity

* fix: adjust test to prevent timezone errors
2026-01-15 19:40:38 -05:00
Gabe Farrell
92648167f0
fix: get current time in tz for listen activity (#146)
* fix: get current time in tz for listen activity

* fix: adjust test to prevent timezone errors
2026-01-15 19:36:48 -05:00
Gabe Farrell
9dbdfe5e41
update README 2026-01-15 18:21:51 -05:00
Gabe Farrell
94108953ec
fix: conditional rendering on artist and album pages (#140) 2026-01-14 22:12:57 -05:00
Gabe Farrell
d87ed2eb97
fix: ensure listen activity correctly sums listen activity in step (#139)
* remove impossible nil check

* fix listen activity not correctly aggregating step

* remove stray log

* fix test
2026-01-14 21:35:01 -05:00
Gabe Farrell
3305ad269e
Add Star History section to README
Added Star History section with visualization.
2026-01-14 17:21:52 -05:00
Gabe Farrell
20bbf62254
update README
Added logo and Ko-Fi badge to README.
2026-01-14 14:47:21 -05:00
Gabe Farrell
a94584da23
create FUNDING.yml 2026-01-14 14:06:14 -05:00
Gabe Farrell
8223a29be6
fix: correctly cycle tracks in backfill (#138) 2026-01-14 12:46:17 -05:00
93 changed files with 2503 additions and 1235 deletions

.env.example Normal file (5 changed lines)
@ -0,0 +1,5 @@
KOITO_ALLOWED_HOSTS=*
KOITO_LOG_LEVEL=debug
KOITO_CONFIG_DIR=test_config_dir
KOITO_DATABASE_URL=postgres://postgres:secret@localhost:5432?sslmode=disable
TZ=Etc/UTC

.github/FUNDING.yml vendored Normal file (3 changed lines)
@ -0,0 +1,3 @@
# These are supported funding model platforms
ko_fi: gabehf

@ -17,6 +17,7 @@ on:
- main
paths-ignore:
- "docs/**"
- "README.md"
workflow_dispatch:

.gitignore vendored (1 changed line)
@ -1 +1,2 @@
test_config_dir
.env

@ -1,3 +1,8 @@
ifneq (,$(wildcard ./.env))
include .env
export
endif
.PHONY: all test clean client
postgres.schemadump:
@ -28,10 +33,10 @@ postgres.remove-scratch:
docker stop koito-scratch && docker rm koito-scratch
api.debug: postgres.start
KOITO_ALLOWED_HOSTS=* KOITO_LOG_LEVEL=debug KOITO_CONFIG_DIR=test_config_dir KOITO_DATABASE_URL=postgres://postgres:secret@localhost:5432?sslmode=disable go run cmd/api/main.go
go run cmd/api/main.go
api.scratch: postgres.run-scratch
KOITO_ALLOWED_HOSTS=* KOITO_LOG_LEVEL=debug KOITO_CONFIG_DIR=test_config_dir/scratch KOITO_DATABASE_URL=postgres://postgres:secret@localhost:5433?sslmode=disable go run cmd/api/main.go
KOITO_DATABASE_URL=postgres://postgres:secret@localhost:5433?sslmode=disable go run cmd/api/main.go
api.test:
go test ./... -timeout 60s

@ -1,4 +1,16 @@
# Koito
<div align="center">
![Koito logo](https://github.com/user-attachments/assets/bd69a050-b40f-4da7-8ff1-4607554bfd6d)
*Koito (小糸) is a Japanese surname. It is also homophonous with the words 恋と (koi to), meaning "and/with love".*
</div>
<div align="center">
[![Ko-Fi](https://img.shields.io/badge/Ko--fi-F16061?style=for-the-badge&logo=ko-fi&logoColor=white)](https://ko-fi.com/gabehf)
</div>
Koito is a modern, themeable ListenBrainz-compatible scrobbler for self-hosters who want control over their data and insights into their listening habits.
It supports relaying to other compatible scrobblers, so you can try it safely without replacing your current setup.
@ -76,6 +88,16 @@ There are currently some known issues that I am actively working on, in addition
If you have any feature ideas, open a GitHub issue to let me know. I'm sorting through ideas to decide which data visualizations and customization options to add next.
## Star History
<a href="https://www.star-history.com/#gabehf/koito&type=date&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=gabehf/koito&type=date&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=gabehf/koito&type=date&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=gabehf/koito&type=date&legend=top-left" />
</picture>
</a>
## Albums that fueled development + notes
More relevant here than any of my other projects...

@ -48,32 +48,32 @@ async function getLastListens(
async function getTopTracks(
args: getItemsArgs
): Promise<PaginatedResponse<Track>> {
): Promise<PaginatedResponse<Ranked<Track>>> {
let url = `/apis/web/v1/top-tracks?period=${args.period}&limit=${args.limit}&page=${args.page}`;
if (args.artist_id) url += `&artist_id=${args.artist_id}`;
else if (args.album_id) url += `&album_id=${args.album_id}`;
const r = await fetch(url);
return handleJson<PaginatedResponse<Track>>(r);
return handleJson<PaginatedResponse<Ranked<Track>>>(r);
}
async function getTopAlbums(
args: getItemsArgs
): Promise<PaginatedResponse<Album>> {
): Promise<PaginatedResponse<Ranked<Album>>> {
let url = `/apis/web/v1/top-albums?period=${args.period}&limit=${args.limit}&page=${args.page}`;
if (args.artist_id) url += `&artist_id=${args.artist_id}`;
const r = await fetch(url);
return handleJson<PaginatedResponse<Album>>(r);
return handleJson<PaginatedResponse<Ranked<Album>>>(r);
}
async function getTopArtists(
args: getItemsArgs
): Promise<PaginatedResponse<Artist>> {
): Promise<PaginatedResponse<Ranked<Artist>>> {
const url = `/apis/web/v1/top-artists?period=${args.period}&limit=${args.limit}&page=${args.page}`;
const r = await fetch(url);
return handleJson<PaginatedResponse<Artist>>(r);
return handleJson<PaginatedResponse<Ranked<Artist>>>(r);
}
async function getActivity(
@ -367,6 +367,7 @@ type Track = {
musicbrainz_id: string;
time_listened: number;
first_listen: number;
all_time_rank: number;
};
type Artist = {
id: number;
@ -378,6 +379,7 @@ type Artist = {
time_listened: number;
first_listen: number;
is_primary: boolean;
all_time_rank: number;
};
type Album = {
id: number;
@ -389,6 +391,7 @@ type Album = {
musicbrainz_id: string;
time_listened: number;
first_listen: number;
all_time_rank: number;
};
type Alias = {
id: number;
@ -407,6 +410,10 @@ type PaginatedResponse<T> = {
current_page: number;
items_per_page: number;
};
type Ranked<T> = {
item: T;
rank: number;
};
type ListenActivityItem = {
start_time: Date;
listens: number;
@ -455,9 +462,9 @@ type NowPlaying = {
};
type RewindStats = {
title: string;
top_artists: Artist[];
top_albums: Album[];
top_tracks: Track[];
top_artists: Ranked<Artist>[];
top_albums: Ranked<Album>[];
top_tracks: Ranked<Track>[];
minutes_listened: number;
avg_minutes_listened_per_day: number;
plays: number;
@ -480,6 +487,7 @@ export type {
Listen,
SearchResponse,
PaginatedResponse,
Ranked,
ListenActivityItem,
InterestBucket,
User,

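The api.ts diff above wraps every top-chart item in the new `Ranked<T>` envelope, which is why consumers elsewhere in this compare now read `item.item.id` instead of `item.id`. As a minimal sketch of working with the envelope (the `unwrap` helper and the sample data are illustrative, not part of the PR; only the fields visible in the diff are modeled):

```typescript
// Sketch of consuming the new Ranked<T> envelope from the api.ts diff.
// Ranked and the items/current_page/items_per_page fields appear in the
// diff; unwrap() is a hypothetical helper.
type Ranked<T> = { item: T; rank: number };

type PaginatedResponse<T> = {
  items: T[];
  current_page: number;
  items_per_page: number;
  // other fields elided in the diff
};

// Flatten a ranked page back into plain items, keeping the server-computed rank.
function unwrap<T extends object>(
  page: PaginatedResponse<Ranked<T>>
): Array<T & { rank: number }> {
  return page.items.map(({ item, rank }) => ({ ...item, rank }));
}
```

Components in this compare (ArtistAlbums, TopItemList, RewindTopItem) instead read `item.item.*` at each use site; unwrapping once at the fetch boundary is an alternative design, not what the PR does.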
@ -58,6 +58,7 @@
--header-sm: 16px;
--header-xl-weight: 600;
--header-weight: 600;
--header-line-height: 3rem;
}
@media (min-width: 60rem) {
@ -68,6 +69,7 @@
--header-sm: 16px;
--header-xl-weight: 600;
--header-weight: 600;
--header-line-height: 1.3em;
}
}
@ -98,6 +100,7 @@ h1 {
font-family: "League Spartan";
font-weight: var(--header-weight);
font-size: var(--header-xl);
line-height: var(--header-line-height);
}
h2 {
font-family: "League Spartan";
@ -130,30 +133,21 @@ h4 {
text-decoration: underline;
}
input[type="text"] {
border: 1px solid var(--color-bg);
}
input[type="text"]:focus {
outline: none;
border: 1px solid var(--color-fg-tertiary);
}
input[type="text"],
input[type="password"],
textarea {
border: 1px solid var(--color-bg);
}
textarea:focus {
outline: none;
border: 1px solid var(--color-fg-tertiary);
input[type="checkbox"] {
height: fit-content;
}
input[type="password"] {
border: 1px solid var(--color-bg);
}
input[type="password"]:focus {
outline: none;
border: 1px solid var(--color-fg-tertiary);
}
input[type="checkbox"]:focus {
outline: none;
border: 1px solid var(--color-fg-tertiary);
input:focus-visible,
button:focus-visible,
a:focus-visible,
select:focus-visible,
textarea:focus-visible {
border-color: transparent;
outline: 2px solid var(--color-fg-tertiary);
}
button:hover {

@ -68,14 +68,14 @@ export default function ActivityGrid({
if (isPending) {
return (
<div className="w-[500px]">
<div className="w-[350px]">
<h3>Activity</h3>
<p>Loading...</p>
</div>
);
} else if (isError) {
return (
<div className="w-[500px]">
<div className="w-[350px]">
<h3>Activity</h3>
<p className="error">Error: {error.message}</p>
</div>

@ -11,7 +11,7 @@ export default function AllTimeStats() {
if (isPending) {
return (
<div className="w-[200px]">
<div>
<h3>{header}</h3>
<p>Loading...</p>
</div>

@ -8,11 +8,11 @@ interface Props {
period: string;
}
export default function ArtistAlbums({ artistId, name, period }: Props) {
export default function ArtistAlbums({ artistId, name }: Props) {
const { isPending, isError, data, error } = useQuery({
queryKey: [
"top-albums",
{ limit: 99, period: "all_time", artist_id: artistId, page: 0 },
{ limit: 99, period: "all_time", artist_id: artistId },
],
queryFn: ({ queryKey }) => getTopAlbums(queryKey[1] as getItemsArgs),
});
@ -39,16 +39,20 @@ export default function ArtistAlbums({ artistId, name, period }: Props) {
<h3>Albums featuring {name}</h3>
<div className="flex flex-wrap gap-8">
{data.items.map((item) => (
<Link to={`/album/${item.id}`} className="flex gap-2 items-start">
<Link
to={`/album/${item.item.id}`}
className="flex gap-2 items-start"
>
<img
src={imageUrl(item.image, "medium")}
alt={item.title}
src={imageUrl(item.item.image, "medium")}
alt={item.item.title}
style={{ width: 130 }}
/>
<div className="w-[180px] flex flex-col items-start gap-1">
<p>{item.title}</p>
<p>{item.item.title}</p>
<p className="text-sm color-fg-secondary">
{item.listen_count} play{item.listen_count > 1 ? "s" : ""}
{item.item.listen_count} play
{item.item.listen_count > 1 ? "s" : ""}
</p>
</div>
</Link>

@ -48,14 +48,14 @@ export default function InterestGraph({
if (isPending) {
return (
<div className="w-[500px]">
<div className="w-[350px] sm:w-[500px]">
<h3>Interest over time</h3>
<p>Loading...</p>
</div>
);
} else if (isError) {
return (
<div className="w-[500px]">
<div className="w-[350px] sm:w-[500px]">
<h3>Interest over time</h3>
<p className="error">Error: {error.message}</p>
</div>
@ -67,7 +67,7 @@ export default function InterestGraph({
// so I think I just have to remove it for now.
return (
<div className="flex flex-col items-start w-full max-w-[500px]">
<div className="flex flex-col items-start w-full max-w-[335px] sm:max-w-[500px]">
<h3>Interest over time</h3>
<AreaChart
style={{

@ -6,11 +6,12 @@ import {
type Artist,
type Track,
type PaginatedResponse,
type Ranked,
} from "api/api";
type Item = Album | Track | Artist;
interface Props<T extends Item> {
interface Props<T extends Ranked<Item>> {
data: PaginatedResponse<T>;
separators?: ConstrainBoolean;
ranked?: boolean;
@ -18,33 +19,17 @@ interface Props<T extends Item> {
className?: string;
}
export default function TopItemList<T extends Item>({
export default function TopItemList<T extends Ranked<Item>>({
data,
separators,
type,
className,
ranked,
}: Props<T>) {
const currentParams = new URLSearchParams(location.search);
const page = Math.max(parseInt(currentParams.get("page") || "1"), 1);
let lastRank = 0;
const calculateRank = (data: Item[], page: number, index: number): number => {
if (
index === 0 ||
data[index] == undefined ||
!(data[index].listen_count === data[index - 1].listen_count)
) {
lastRank = index + 1 + (page - 1) * 100;
}
return lastRank;
};
return (
<div className={`flex flex-col gap-1 ${className} min-w-[200px]`}>
{data.items.map((item, index) => {
const key = `${type}-${item.id}`;
const key = `${type}-${item.item.id}`;
return (
<div
key={key}
@ -57,10 +42,10 @@ export default function TopItemList<T extends Item>({
>
<ItemCard
ranked={ranked}
rank={calculateRank(data.items, page, index)}
item={item}
rank={item.rank}
item={item.item}
type={type}
key={type + item.id}
key={type + item.item.id}
/>
</div>
);

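The TopItemList diff above deletes the client-side `calculateRank` helper in favor of the server-supplied `item.rank`. For reference, the tie-aware ranking the deleted code implemented can be expressed as a pure function (the per-page size of 100 follows the deleted code; this is a reconstruction, not the server implementation):

```typescript
// Standalone version of the deleted calculateRank logic from TopItemList:
// standard competition ranking, where items tied on listen_count share the
// rank of the first item in the tied group.
type Counted = { listen_count: number };

function computeRanks(items: Counted[], page = 1, perPage = 100): number[] {
  let lastRank = 0;
  return items.map((item, index) => {
    // Start a new rank group unless this item ties the previous one.
    if (index === 0 || item.listen_count !== items[index - 1].listen_count) {
      lastRank = index + 1 + (page - 1) * perPage;
    }
    return lastRank;
  });
}
```

Moving this to the server avoids the bug the old code had at page boundaries, where the first item of a page could not see the previous page's counts.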
@ -20,7 +20,7 @@ export default function DeleteModal({ open, setOpen, title, id, type }: Props) {
setLoading(true);
deleteItem(type.toLowerCase(), id).then((r) => {
if (r.ok) {
navigate("/");
navigate(-1);
} else {
console.log(r);
}

@ -54,7 +54,7 @@ export default function LoginForm() {
className="w-full mx-auto fg bg rounded p-2"
onChange={(e) => setPassword(e.target.value)}
/>
<div className="flex gap-2">
<div className="flex gap-2 items-center">
<input
type="checkbox"
name="koito-remember"

@ -19,7 +19,7 @@ interface Props {
}
export default function MergeModal(props: Props) {
const [query, setQuery] = useState("");
const [query, setQuery] = useState(props.currentTitle);
const [data, setData] = useState<SearchResponse>();
const [debouncedQuery, setDebouncedQuery] = useState(query);
const [mergeTarget, setMergeTarget] = useState<{ title: string; id: number }>(
@ -101,11 +101,12 @@ export default function MergeModal(props: Props) {
<input
type="text"
autoFocus
defaultValue={props.currentTitle}
// i find my stupid a(n) logic to be a little silly so im leaving it in even if its not optimal
placeholder={`Search for a${
props.type.toLowerCase()[0] === "a" ? "n" : ""
} ${props.type.toLowerCase()} to be merged into the current ${props.type.toLowerCase()}`}
placeholder={`Search for a${props.type.toLowerCase()[0] === "a" ? "n" : ""
} ${props.type.toLowerCase()} to be merged into the current ${props.type.toLowerCase()}`}
className="w-full mx-auto fg bg rounded p-2"
onFocus={(e) => { setQuery(e.target.value); e.target.select()}}
onChange={(e) => setQuery(e.target.value)}
/>
<SearchResults selectorMode data={data} onSelect={toggleSelect} />
@ -128,7 +129,7 @@ export default function MergeModal(props: Props) {
>
Merge Items
</button>
<div className="flex gap-2 mt-3">
<div className="flex items-center gap-2 mt-3">
<input
type="checkbox"
name="reverse-merge-order"
@ -139,7 +140,7 @@ export default function MergeModal(props: Props) {
</div>
{(props.type.toLowerCase() === "album" ||
props.type.toLowerCase() === "artist") && (
<div className="flex gap-2 mt-3">
<div className="flex items-center gap-2 mt-3">
<input
type="checkbox"
name="replace-image"

@ -32,10 +32,34 @@ export function Modal({
}
}, [isOpen, shouldRender]);
// Close on Escape key
// Handle keyboard events
useEffect(() => {
const handleKeyDown = (e: KeyboardEvent) => {
if (e.key === 'Escape') onClose();
// Close on Escape key
if (e.key === 'Escape') {
onClose()
// Trap tab navigation to the modal
} else if (e.key === 'Tab') {
if (modalRef.current) {
const focusableEls = modalRef.current.querySelectorAll<HTMLElement>(
'button:not(:disabled), [href], input:not(:disabled), select:not(:disabled), textarea:not(:disabled), [tabindex]:not([tabindex="-1"])'
);
const firstEl = focusableEls[0];
const lastEl = focusableEls[focusableEls.length - 1];
const activeEl = document.activeElement
if (e.shiftKey && activeEl === firstEl) {
e.preventDefault();
lastEl.focus();
} else if (!e.shiftKey && activeEl === lastEl) {
e.preventDefault();
firstEl.focus();
} else if (!Array.from(focusableEls).find(node => node.isEqualNode(activeEl))) {
e.preventDefault();
firstEl.focus();
}
}
};
};
if (isOpen) document.addEventListener('keydown', handleKeyDown);
return () => document.removeEventListener('keydown', handleKeyDown);
@ -70,13 +94,13 @@ export function Modal({
}`}
style={{ maxWidth: maxW ?? 600, height: h ?? '' }}
>
{children}
<button
onClick={onClose}
className="absolute top-2 right-2 color-fg-tertiary hover:cursor-pointer"
>
🞪
</button>
{children}
</div>
</div>,
document.body

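The Modal diff above adds Tab-key focus trapping inside the modal. The boundary decisions can be sketched DOM-free as a pure function (`nextFocusIndex` is a hypothetical name; the real handler works on a `querySelectorAll` list and calls `preventDefault()` only when it redirects focus):

```typescript
// DOM-free sketch of the focus-trap decisions added in Modal.tsx.
// Returns the index focus should jump to, or null to let the browser
// perform its normal Tab behavior.
function nextFocusIndex(
  count: number,       // number of focusable elements in the modal
  activeIndex: number, // index of document.activeElement, -1 if outside the modal
  shiftKey: boolean
): number | null {
  if (count === 0) return null;
  if (activeIndex === -1) return 0;                     // focus escaped: pull it back
  if (shiftKey && activeIndex === 0) return count - 1;  // wrap backwards from first
  if (!shiftKey && activeIndex === count - 1) return 0; // wrap forwards from last
  return null;                                          // mid-list: default Tab order
}
```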
@ -8,9 +8,9 @@ interface Props {
}
export default function Rewind(props: Props) {
const artistimg = props.stats.top_artists[0]?.image;
const albumimg = props.stats.top_albums[0]?.image;
const trackimg = props.stats.top_tracks[0]?.image;
const artistimg = props.stats.top_artists[0]?.item.image;
const albumimg = props.stats.top_albums[0]?.item.image;
const trackimg = props.stats.top_tracks[0]?.item.image;
if (
!props.stats.top_artists[0] ||
!props.stats.top_albums[0] ||

@ -1,7 +1,9 @@
import type { Ranked } from "api/api";
type TopItemProps<T> = {
title: string;
imageSrc: string;
items: T[];
items: Ranked<T>[];
getLabel: (item: T) => string;
includeTime?: boolean;
};
@ -28,23 +30,23 @@ export function RewindTopItem<
<div className="flex items-center gap-2">
<div className="flex flex-col items-start mb-2">
<h2>{getLabel(top)}</h2>
<h2>{getLabel(top.item)}</h2>
<span className="text-(--color-fg-tertiary) -mt-3 text-sm">
{`${top.listen_count} plays`}
{`${top.item.listen_count} plays`}
{includeTime
? ` (${Math.floor(top.time_listened / 60)} minutes)`
? ` (${Math.floor(top.item.time_listened / 60)} minutes)`
: ``}
</span>
</div>
</div>
{rest.map((e) => (
<div key={e.id} className="text-sm">
{getLabel(e)}
<div key={e.item.id} className="text-sm">
{getLabel(e.item)}
<span className="text-(--color-fg-tertiary)">
{` - ${e.listen_count} plays`}
{` - ${e.item.listen_count} plays`}
{includeTime
? ` (${Math.floor(e.time_listened / 60)} minutes)`
? ` (${Math.floor(e.item.time_listened / 60)} minutes)`
: ``}
</span>
</div>

@ -1,23 +1,43 @@
import type { Theme } from "~/styles/themes.css";
interface Props {
theme: Theme
themeName: string
setTheme: Function
theme: Theme;
themeName: string;
setTheme: Function;
}
export default function ThemeOption({ theme, themeName, setTheme }: Props) {
const capitalizeFirstLetter = (s: string) => {
return s.charAt(0).toUpperCase() + s.slice(1);
};
const capitalizeFirstLetter = (s: string) => {
return s.charAt(0).toUpperCase() + s.slice(1);
}
return (
<div onClick={() => setTheme(themeName)} className="rounded-md p-3 sm:p-5 hover:cursor-pointer flex gap-4 items-center border-2" style={{background: theme.bg, color: theme.fg, borderColor: theme.bgSecondary}}>
<div className="text-xs sm:text-sm">{capitalizeFirstLetter(themeName)}</div>
<div className="w-[50px] h-[30px] rounded-md" style={{background: theme.bgSecondary}}></div>
<div className="w-[50px] h-[30px] rounded-md" style={{background: theme.fgSecondary}}></div>
<div className="w-[50px] h-[30px] rounded-md" style={{background: theme.primary}}></div>
</div>
)
}
return (
<div
onClick={() => setTheme(themeName)}
className="rounded-md p-3 sm:p-5 hover:cursor-pointer flex gap-3 items-center border-2 justify-between"
style={{
background: theme.bg,
color: theme.fg,
borderColor: theme.bgSecondary,
}}
>
<div className="text-xs sm:text-sm">
{capitalizeFirstLetter(themeName)}
</div>
<div className="flex gap-2 w-full">
<div
className="w-2/7 max-w-[50px] h-[30px] rounded-md"
style={{ background: theme.bgSecondary }}
></div>
<div
className="w-2/7 max-w-[50px] h-[30px] rounded-md"
style={{ background: theme.fgSecondary }}
></div>
<div
className="w-2/7 max-w-[50px] h-[30px] rounded-md"
style={{ background: theme.primary }}
></div>
</div>
</div>
);
}

@ -49,7 +49,7 @@ export function ThemeSwitcher() {
<AsyncButton onClick={resetTheme}>Reset</AsyncButton>
</div>
</div>
<div className="grid grid-cols-2 items-center gap-2">
<div className="grid grid-cols-1 sm:grid-cols-2 items-center gap-2">
{Object.entries(themes).map(([name, themeData]) => (
<ThemeOption
setTheme={setTheme}

@ -116,12 +116,12 @@ export function ErrorBoundary() {
<AppProvider>
<ThemeProvider>
<title>{title}</title>
<Sidebar />
<div className="flex">
<Sidebar />
<div className="w-full flex flex-col">
<main className="pt-16 p-4 container mx-auto flex-grow">
<div className="flex gap-4 items-end">
<img className="w-[200px] rounded" src="../yuu.jpg" />
<main className="pt-16 p-4 mx-auto flex-grow">
<div className="md:flex gap-4">
<img className="w-[200px] rounded mb-3" src="../yuu.jpg" />
<div>
<h1>{message}</h1>
<p>{details}</p>

@ -1,7 +1,7 @@
import TopItemList from "~/components/TopItemList";
import ChartLayout from "./ChartLayout";
import { useLoaderData, type LoaderFunctionArgs } from "react-router";
import { type Album, type PaginatedResponse } from "api/api";
import { type Album, type PaginatedResponse, type Ranked } from "api/api";
export async function clientLoader({ request }: LoaderFunctionArgs) {
const url = new URL(request.url);
@ -21,7 +21,7 @@ export async function clientLoader({ request }: LoaderFunctionArgs) {
export default function AlbumChart() {
const { top_albums: initialData } = useLoaderData<{
top_albums: PaginatedResponse<Album>;
top_albums: PaginatedResponse<Ranked<Album>>;
}>();
return (
@ -30,7 +30,7 @@ export default function AlbumChart() {
initialData={initialData}
endpoint="chart/top-albums"
render={({ data, page, onNext, onPrev }) => (
<div className="flex flex-col gap-5">
<div className="flex flex-col gap-5 w-full">
<div className="flex gap-15 mx-auto">
<button className="default" onClick={onPrev} disabled={page <= 1}>
Prev
@ -47,7 +47,7 @@ export default function AlbumChart() {
ranked
separators
data={data}
className="w-[400px] sm:w-[600px]"
className="w-11/12 sm:w-[600px]"
type="album"
/>
<div className="flex gap-15 mx-auto">

@ -1,7 +1,7 @@
import TopItemList from "~/components/TopItemList";
import ChartLayout from "./ChartLayout";
import { useLoaderData, type LoaderFunctionArgs } from "react-router";
import { type Album, type PaginatedResponse } from "api/api";
import { type Album, type PaginatedResponse, type Ranked } from "api/api";
export async function clientLoader({ request }: LoaderFunctionArgs) {
const url = new URL(request.url);
@ -21,7 +21,7 @@ export async function clientLoader({ request }: LoaderFunctionArgs) {
export default function Artist() {
const { top_artists: initialData } = useLoaderData<{
top_artists: PaginatedResponse<Album>;
top_artists: PaginatedResponse<Ranked<Album>>;
}>();
return (
@ -30,7 +30,7 @@ export default function Artist() {
initialData={initialData}
endpoint="chart/top-artists"
render={({ data, page, onNext, onPrev }) => (
<div className="flex flex-col gap-5">
<div className="flex flex-col gap-5 w-full">
<div className="flex gap-15 mx-auto">
<button className="default" onClick={onPrev} disabled={page <= 1}>
Prev
@ -47,7 +47,7 @@ export default function Artist() {
ranked
separators
data={data}
className="w-[400px] sm:w-[600px]"
className="w-11/12 sm:w-[600px]"
type="artist"
/>
<div className="flex gap-15 mx-auto">

@ -40,7 +40,7 @@ export default function ChartLayout<T>({
useEffect(() => {
if ((data?.items?.length ?? 0) === 0) return;
const img = (data.items[0] as any)?.image;
const img = (data.items[0] as any)?.item?.image;
if (!img) return;
average(imageUrl(img, "small"), { amount: 1 }).then((color) => {

@ -1,7 +1,7 @@
import TopItemList from "~/components/TopItemList";
import ChartLayout from "./ChartLayout";
import { useLoaderData, type LoaderFunctionArgs } from "react-router";
import { type Album, type PaginatedResponse } from "api/api";
import { type Track, type PaginatedResponse, type Ranked } from "api/api";
export async function clientLoader({ request }: LoaderFunctionArgs) {
const url = new URL(request.url);
@ -15,13 +15,13 @@ export async function clientLoader({ request }: LoaderFunctionArgs) {
throw new Response("Failed to load top tracks", { status: 500 });
}
const top_tracks: PaginatedResponse<Album> = await res.json();
const top_tracks: PaginatedResponse<Track> = await res.json();
return { top_tracks };
}
export default function TrackChart() {
const { top_tracks: initialData } = useLoaderData<{
top_tracks: PaginatedResponse<Album>;
top_tracks: PaginatedResponse<Ranked<Track>>;
}>();
return (
@ -30,7 +30,7 @@ export default function TrackChart() {
initialData={initialData}
endpoint="chart/top-tracks"
render={({ data, page, onNext, onPrev }) => (
<div className="flex flex-col gap-5">
<div className="flex flex-col gap-5 w-full">
<div className="flex gap-15 mx-auto">
<button className="default" onClick={onPrev} disabled={page <= 1}>
Prev
@ -47,7 +47,7 @@ export default function TrackChart() {
ranked
separators
data={data}
className="w-[400px] sm:w-[600px]"
className="w-11/12 sm:w-[600px]"
type="track"
/>
<div className="flex gap-15 mx-auto">

@ -10,20 +10,17 @@ import PeriodSelector from "~/components/PeriodSelector";
import { useAppContext } from "~/providers/AppProvider";
export function meta({}: Route.MetaArgs) {
return [
{ title: "Koito" },
{ name: "description", content: "Koito" },
];
return [{ title: "Koito" }, { name: "description", content: "Koito" }];
}
export default function Home() {
const [period, setPeriod] = useState('week')
const [period, setPeriod] = useState("week");
const { homeItems } = useAppContext();
return (
<main className="flex flex-grow justify-center pb-4">
<div className="flex-1 flex flex-col items-center gap-16 min-h-0 mt-20">
<main className="flex flex-grow justify-center pb-4 w-full bg-linear-to-b to-(--color-bg) from-(--color-bg-secondary) to-60%">
<div className="flex-1 flex flex-col items-center gap-16 min-h-0 sm:mt-20 mt-10">
<div className="flex flex-col md:flex-row gap-10 md:gap-20">
<AllTimeStats />
<ActivityGrid configurable />
@ -33,7 +30,10 @@ export default function Home() {
<TopArtists period={period} limit={homeItems} />
<TopAlbums period={period} limit={homeItems} />
<TopTracks period={period} limit={homeItems} />
<LastPlays showNowPlaying={true} limit={Math.floor(homeItems * 2.7)} />
<LastPlays
showNowPlaying={true}
limit={Math.floor(homeItems * 2.7)}
/>
</div>
</div>
</main>

@ -30,6 +30,7 @@ export default function Album() {
title={album.title}
img={album.image}
id={album.id}
rank={album.all_time_rank}
musicbrainzId={album.musicbrainz_id}
imgItemId={album.id}
mergeFunc={mergeAlbums}
@ -45,17 +46,17 @@ export default function Album() {
}}
subContent={
<div className="flex flex-col gap-2 items-start">
{album.listen_count && (
{album.listen_count !== 0 && (
<p>
{album.listen_count} play{album.listen_count > 1 ? "s" : ""}
</p>
)}
{album.time_listened && (
{album.time_listened !== 0 && (
<p title={Math.floor(album.time_listened / 60 / 60) + " hours"}>
{timeListenedString(album.time_listened)}
</p>
)}
{album.first_listen && (
{album.first_listen > 0 && (
<p title={new Date(album.first_listen * 1000).toLocaleString()}>
Listening since{" "}
{new Date(album.first_listen * 1000).toLocaleDateString()}

@ -36,6 +36,7 @@ export default function Artist() {
title={artist.name}
img={artist.image}
id={artist.id}
rank={artist.all_time_rank}
musicbrainzId={artist.musicbrainz_id}
imgItemId={artist.id}
mergeFunc={mergeArtists}
@ -56,17 +57,17 @@ export default function Artist() {
{artist.listen_count} play{artist.listen_count > 1 ? "s" : ""}
</p>
)}
{
{artist.time_listened !== 0 && (
<p title={Math.floor(artist.time_listened / 60 / 60) + " hours"}>
{timeListenedString(artist.time_listened)}
</p>
}
{
)}
{artist.first_listen > 0 && (
<p title={new Date(artist.first_listen * 1000).toLocaleString()}>
Listening since{" "}
{new Date(artist.first_listen * 1000).toLocaleDateString()}
</p>
}
)}
</div>
}
>

@ -28,6 +28,7 @@ interface Props {
title: string;
img: string;
id: number;
rank: number;
musicbrainzId: string;
imgItemId: number;
mergeFunc: MergeFunc;
@ -96,7 +97,15 @@ export default function MediaLayout(props: Props) {
</div>
<div className="flex flex-col items-start">
<h3>{props.type}</h3>
<h1>{props.title}</h1>
<div className="flex">
<h1>
{props.title}
<span className="text-xl font-medium text-(--color-fg-secondary)">
{" "}
#{props.rank}
</span>
</h1>
</div>
{props.subContent}
</div>
<div className="absolute left-1 sm:right-1 sm:left-auto -top-9 sm:top-1 flex gap-3 items-center">

@ -34,6 +34,7 @@ export default function Track() {
title={track.title}
img={track.image}
id={track.id}
rank={track.all_time_rank}
musicbrainzId={track.musicbrainz_id}
imgItemId={track.album_id}
mergeFunc={mergeTracks}

@ -29,10 +29,12 @@ const months = [
export async function clientLoader({ request }: LoaderFunctionArgs) {
const url = new URL(request.url);
const year =
parseInt(url.searchParams.get("year") || "0") || getRewindParams().year;
const month =
parseInt(url.searchParams.get("month") || "0") || getRewindParams().month;
const year = parseInt(
url.searchParams.get("year") || getRewindParams().year.toString()
);
const month = parseInt(
url.searchParams.get("month") || getRewindParams().month.toString()
);
const res = await fetch(`/apis/web/v1/summary?year=${year}&month=${month}`);
if (!res.ok) {
@ -46,10 +48,12 @@ export async function clientLoader({ request }: LoaderFunctionArgs) {
export default function RewindPage() {
const currentParams = new URLSearchParams(location.search);
let year =
parseInt(currentParams.get("year") || "0") || getRewindParams().year;
let month =
parseInt(currentParams.get("month") || "0") || getRewindParams().month;
let year = parseInt(
currentParams.get("year") || getRewindParams().year.toString()
);
let month = parseInt(
currentParams.get("month") || getRewindParams().month.toString()
);
const navigate = useNavigate();
const [showTime, setShowTime] = useState(false);
const { stats: stats } = useLoaderData<{ stats: RewindStats }>();
@@ -59,7 +63,7 @@ export default function RewindPage() {
useEffect(() => {
if (!stats.top_artists[0]) return;
const img = (stats.top_artists[0] as any)?.image;
const img = (stats.top_artists[0] as any)?.item.image;
if (!img) return;
average(imageUrl(img, "small"), { amount: 1 }).then((color) => {
@@ -73,10 +77,8 @@ export default function RewindPage() {
for (const key in params) {
const val = params[key];
if (val !== null && val !== "0") {
if (val !== null) {
nextParams.set(key, val);
} else {
nextParams.delete(key);
}
}
@@ -99,6 +101,7 @@ export default function RewindPage() {
month -= 1;
}
}
console.log(`Month: ${month}`);
updateParams({
year: year.toString(),
@@ -154,7 +157,12 @@ export default function RewindPage() {
<button
onClick={() => navigateMonth("next")}
className="p-2 disabled:text-(--color-fg-tertiary)"
disabled={new Date(year, month) > new Date()}
disabled={
// next month is current or future month and
month >= new Date().getMonth() &&
// we are looking at current (or future) year
year >= new Date().getFullYear()
}
>
<ChevronRight size={20} />
</button>

View file

@@ -92,7 +92,7 @@ export const themes: Record<string, Theme> = {
fg: "#fef9f3",
fgSecondary: "#dbc6b0",
fgTertiary: "#a3917a",
primary: "#d97706",
primary: "#F0850A",
primaryDim: "#b45309",
accent: "#8c4c28",
accentDim: "#6b3b1f",

View file

@@ -0,0 +1,9 @@
-- +goose Up
DELETE FROM artist_releases ar
WHERE NOT EXISTS (
SELECT 1
FROM artist_tracks at
JOIN tracks t ON at.track_id = t.id
WHERE at.artist_id = ar.artist_id
AND t.release_id = ar.release_id
);

View file

@@ -56,22 +56,60 @@ LEFT JOIN artist_aliases aa ON a.id = aa.artist_id
WHERE a.musicbrainz_id = $1
GROUP BY a.id, a.musicbrainz_id, a.image, a.image_source, a.name;
-- name: GetArtistsWithoutImages :many
SELECT
*
FROM artists_with_name
WHERE image IS NULL
AND id > $2
ORDER BY id ASC
LIMIT $1;
-- name: GetTopArtistsPaginated :many
SELECT
x.id,
x.name,
x.musicbrainz_id,
x.image,
x.listen_count,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
a.id,
a.name,
a.musicbrainz_id,
a.image,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN artist_tracks at ON at.track_id = t.id
JOIN artists_with_name a ON a.id = at.artist_id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY a.id, a.name, a.musicbrainz_id, a.image, a.image_source, a.name
ORDER BY listen_count DESC, a.id
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN artist_tracks at ON at.track_id = t.id
JOIN artists_with_name a ON a.id = at.artist_id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY a.id, a.name, a.musicbrainz_id, a.image
) x
ORDER BY x.listen_count DESC, x.id
LIMIT $3 OFFSET $4;
-- name: GetArtistAllTimeRank :one
SELECT
artist_id,
rank
FROM (
SELECT
x.artist_id,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
at.artist_id,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN artist_tracks at ON t.id = at.track_id
GROUP BY at.artist_id
) x
)
WHERE artist_id = $1;
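The `GetArtistAllTimeRank` query above (like its release and track siblings further down) relies on SQL `RANK()` semantics: tied listen counts share a rank, and the next distinct count skips past the ties. A minimal Go sketch of those semantics (a hypothetical helper for illustration, not part of this change):

```go
package main

import (
	"fmt"
	"sort"
)

// rankByCount mirrors SQL's RANK() OVER (ORDER BY count DESC):
// equal counts share a rank, and the next distinct count skips
// as many positions as there were ties.
func rankByCount(counts map[int64]int64) map[int64]int64 {
	type row struct{ id, count int64 }
	rows := make([]row, 0, len(counts))
	for id, c := range counts {
		rows = append(rows, row{id, c})
	}
	sort.Slice(rows, func(i, j int) bool { return rows[i].count > rows[j].count })
	ranks := make(map[int64]int64, len(rows))
	for i, r := range rows {
		if i > 0 && r.count == rows[i-1].count {
			ranks[r.id] = ranks[rows[i-1].id] // tie: share the previous rank
		} else {
			ranks[r.id] = int64(i + 1) // rank jumps past any ties
		}
	}
	return ranks
}

func main() {
	// listen counts per artist id: artists 1 and 2 are tied
	r := rankByCount(map[int64]int64{1: 10, 2: 10, 3: 7})
	fmt.Println(r[1], r[2], r[3]) // 1 1 3
}
```

Because ties skip ranks, a `#3` badge can appear with no `#2` above it; `DENSE_RANK()` would be the alternative if gapless numbering were wanted.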
-- name: CountTopArtists :one
SELECT COUNT(DISTINCT at.artist_id) AS total_count
FROM listens l

View file

@@ -3,7 +3,13 @@ DO $$
BEGIN
DELETE FROM tracks WHERE id NOT IN (SELECT l.track_id FROM listens l);
DELETE FROM releases WHERE id NOT IN (SELECT t.release_id FROM tracks t);
-- DELETE FROM releases WHERE release_group_id NOT IN (SELECT t.release_group_id FROM tracks t);
-- DELETE FROM releases WHERE release_group_id NOT IN (SELECT rg.id FROM release_groups rg);
DELETE FROM artists WHERE id NOT IN (SELECT at.artist_id FROM artist_tracks at);
DELETE FROM artist_releases ar
WHERE NOT EXISTS (
SELECT 1
FROM artist_tracks at
JOIN tracks t ON at.track_id = t.id
WHERE at.artist_id = ar.artist_id
AND t.release_id = ar.release_id
);
END $$;

View file

@@ -1,162 +1,139 @@
-- name: GetGroupedListensFromArtist :many
WITH artist_listens AS (
WITH bounds AS (
SELECT
l.listened_at
MIN(l.listened_at) AS start_time,
NOW() AS end_time
FROM listens l
JOIN tracks t ON t.id = l.track_id
JOIN artist_tracks at ON at.track_id = t.id
WHERE at.artist_id = $1
),
bounds AS (
stats AS (
SELECT
MIN(listened_at) AS start_time,
MAX(listened_at) AS end_time
FROM artist_listens
start_time,
end_time,
EXTRACT(EPOCH FROM (end_time - start_time)) AS total_seconds,
((end_time - start_time) / sqlc.arg(bucket_count)::int) AS bucket_interval
FROM bounds
),
bucketed AS (
bucket_series AS (
SELECT generate_series(0, sqlc.arg(bucket_count)::int - 1) AS idx
),
listen_indices AS (
SELECT
LEAST(
sqlc.arg(bucket_count) - 1,
sqlc.arg(bucket_count)::int - 1,
FLOOR(
(
EXTRACT(EPOCH FROM (al.listened_at - b.start_time))
/
NULLIF(EXTRACT(EPOCH FROM (b.end_time - b.start_time)), 0)
) * sqlc.arg(bucket_count)
(EXTRACT(EPOCH FROM (l.listened_at - s.start_time)) / NULLIF(s.total_seconds, 0))
* sqlc.arg(bucket_count)::int
)::int
) AS bucket_idx,
b.start_time,
b.end_time
FROM artist_listens al
CROSS JOIN bounds b
),
aggregated AS (
SELECT
start_time
+ (
bucket_idx * (end_time - start_time)
/ sqlc.arg(bucket_count)
) AS bucket_start,
start_time
+ (
(bucket_idx + 1) * (end_time - start_time)
/ sqlc.arg(bucket_count)
) AS bucket_end,
COUNT(*) AS listen_count
FROM bucketed
GROUP BY bucket_idx, start_time, end_time
) AS bucket_idx
FROM listens l
JOIN tracks t ON t.id = l.track_id
JOIN artist_tracks at ON at.track_id = t.id
CROSS JOIN stats s
WHERE at.artist_id = $1
AND s.start_time IS NOT NULL
)
SELECT
bucket_start::timestamptz,
bucket_end::timestamptz,
listen_count
FROM aggregated
ORDER BY bucket_start;
(s.start_time + (s.bucket_interval * bs.idx))::timestamptz AS bucket_start,
(s.start_time + (s.bucket_interval * (bs.idx + 1)))::timestamptz AS bucket_end,
COUNT(li.bucket_idx) AS listen_count
FROM bucket_series bs
CROSS JOIN stats s
LEFT JOIN listen_indices li ON bs.idx = li.bucket_idx
WHERE s.start_time IS NOT NULL
GROUP BY bs.idx, s.start_time, s.bucket_interval
ORDER BY bs.idx;
-- name: GetGroupedListensFromRelease :many
WITH artist_listens AS (
WITH bounds AS (
SELECT
l.listened_at
MIN(l.listened_at) AS start_time,
NOW() AS end_time
FROM listens l
JOIN tracks t ON t.id = l.track_id
WHERE t.release_id = $1
),
bounds AS (
stats AS (
SELECT
MIN(listened_at) AS start_time,
MAX(listened_at) AS end_time
FROM artist_listens
start_time,
end_time,
EXTRACT(EPOCH FROM (end_time - start_time)) AS total_seconds,
((end_time - start_time) / sqlc.arg(bucket_count)::int) AS bucket_interval
FROM bounds
),
bucketed AS (
bucket_series AS (
SELECT generate_series(0, sqlc.arg(bucket_count)::int - 1) AS idx
),
listen_indices AS (
SELECT
LEAST(
sqlc.arg(bucket_count) - 1,
sqlc.arg(bucket_count)::int - 1,
FLOOR(
(
EXTRACT(EPOCH FROM (al.listened_at - b.start_time))
/
NULLIF(EXTRACT(EPOCH FROM (b.end_time - b.start_time)), 0)
) * sqlc.arg(bucket_count)
(EXTRACT(EPOCH FROM (l.listened_at - s.start_time)) / NULLIF(s.total_seconds, 0))
* sqlc.arg(bucket_count)::int
)::int
) AS bucket_idx,
b.start_time,
b.end_time
FROM artist_listens al
CROSS JOIN bounds b
),
aggregated AS (
SELECT
start_time
+ (
bucket_idx * (end_time - start_time)
/ sqlc.arg(bucket_count)
) AS bucket_start,
start_time
+ (
(bucket_idx + 1) * (end_time - start_time)
/ sqlc.arg(bucket_count)
) AS bucket_end,
COUNT(*) AS listen_count
FROM bucketed
GROUP BY bucket_idx, start_time, end_time
) AS bucket_idx
FROM listens l
JOIN tracks t ON t.id = l.track_id
CROSS JOIN stats s
WHERE t.release_id = $1
AND s.start_time IS NOT NULL
)
SELECT
bucket_start::timestamptz,
bucket_end::timestamptz,
listen_count
FROM aggregated
ORDER BY bucket_start;
(s.start_time + (s.bucket_interval * bs.idx))::timestamptz AS bucket_start,
(s.start_time + (s.bucket_interval * (bs.idx + 1)))::timestamptz AS bucket_end,
COUNT(li.bucket_idx) AS listen_count
FROM bucket_series bs
CROSS JOIN stats s
LEFT JOIN listen_indices li ON bs.idx = li.bucket_idx
WHERE s.start_time IS NOT NULL
GROUP BY bs.idx, s.start_time, s.bucket_interval
ORDER BY bs.idx;
-- name: GetGroupedListensFromTrack :many
WITH artist_listens AS (
WITH bounds AS (
SELECT
l.listened_at
MIN(l.listened_at) AS start_time,
NOW() AS end_time
FROM listens l
JOIN tracks t ON t.id = l.track_id
WHERE t.id = $1
),
bounds AS (
stats AS (
SELECT
MIN(listened_at) AS start_time,
MAX(listened_at) AS end_time
FROM artist_listens
start_time,
end_time,
EXTRACT(EPOCH FROM (end_time - start_time)) AS total_seconds,
((end_time - start_time) / sqlc.arg(bucket_count)::int) AS bucket_interval
FROM bounds
),
bucketed AS (
bucket_series AS (
SELECT generate_series(0, sqlc.arg(bucket_count)::int - 1) AS idx
),
listen_indices AS (
SELECT
LEAST(
sqlc.arg(bucket_count) - 1,
sqlc.arg(bucket_count)::int - 1,
FLOOR(
(
EXTRACT(EPOCH FROM (al.listened_at - b.start_time))
/
NULLIF(EXTRACT(EPOCH FROM (b.end_time - b.start_time)), 0)
) * sqlc.arg(bucket_count)
(EXTRACT(EPOCH FROM (l.listened_at - s.start_time)) / NULLIF(s.total_seconds, 0))
* sqlc.arg(bucket_count)::int
)::int
) AS bucket_idx,
b.start_time,
b.end_time
FROM artist_listens al
CROSS JOIN bounds b
),
aggregated AS (
SELECT
start_time
+ (
bucket_idx * (end_time - start_time)
/ sqlc.arg(bucket_count)
) AS bucket_start,
start_time
+ (
(bucket_idx + 1) * (end_time - start_time)
/ sqlc.arg(bucket_count)
) AS bucket_end,
COUNT(*) AS listen_count
FROM bucketed
GROUP BY bucket_idx, start_time, end_time
) AS bucket_idx
FROM listens l
JOIN tracks t ON t.id = l.track_id
CROSS JOIN stats s
WHERE t.id = $1
AND s.start_time IS NOT NULL
)
SELECT
bucket_start::timestamptz,
bucket_end::timestamptz,
listen_count
FROM aggregated
ORDER BY bucket_start;
(s.start_time + (s.bucket_interval * bs.idx))::timestamptz AS bucket_start,
(s.start_time + (s.bucket_interval * (bs.idx + 1)))::timestamptz AS bucket_end,
COUNT(li.bucket_idx) AS listen_count
FROM bucket_series bs
CROSS JOIN stats s
LEFT JOIN listen_indices li ON bs.idx = li.bucket_idx
WHERE s.start_time IS NOT NULL
GROUP BY bs.idx, s.start_time, s.bucket_interval
ORDER BY bs.idx;

View file

@@ -47,32 +47,61 @@ WHERE r.title = ANY ($1::TEXT[])
-- name: GetTopReleasesFromArtist :many
SELECT
r.*,
COUNT(*) AS listen_count,
get_artists_for_release(r.id) AS artists
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN releases_with_title r ON t.release_id = r.id
JOIN artist_releases ar ON r.id = ar.release_id
WHERE ar.artist_id = $5
AND l.listened_at BETWEEN $1 AND $2
GROUP BY r.id, r.title, r.musicbrainz_id, r.various_artists, r.image, r.image_source
ORDER BY listen_count DESC, r.id
x.*,
get_artists_for_release(x.id) AS artists,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
r.*,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN releases_with_title r ON t.release_id = r.id
JOIN artist_releases ar ON r.id = ar.release_id
WHERE ar.artist_id = $5
AND l.listened_at BETWEEN $1 AND $2
GROUP BY r.id, r.title, r.musicbrainz_id, r.various_artists, r.image, r.image_source
) x
ORDER BY listen_count DESC, x.id
LIMIT $3 OFFSET $4;
-- name: GetTopReleasesPaginated :many
SELECT
r.*,
COUNT(*) AS listen_count,
get_artists_for_release(r.id) AS artists
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN releases_with_title r ON t.release_id = r.id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY r.id, r.title, r.musicbrainz_id, r.various_artists, r.image, r.image_source
ORDER BY listen_count DESC, r.id
x.*,
get_artists_for_release(x.id) AS artists,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
r.*,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN releases_with_title r ON t.release_id = r.id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY r.id, r.title, r.musicbrainz_id, r.various_artists, r.image, r.image_source
) x
ORDER BY listen_count DESC, x.id
LIMIT $3 OFFSET $4;
-- name: GetReleaseAllTimeRank :one
SELECT
release_id,
rank
FROM (
SELECT
x.release_id,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
t.release_id,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
GROUP BY t.release_id
) x
)
WHERE release_id = $1;
-- name: CountTopReleases :one
SELECT COUNT(DISTINCT r.id) AS total_count
FROM listens l

View file

@@ -39,57 +39,100 @@ HAVING COUNT(DISTINCT at.artist_id) = cardinality($3::int[]);
-- name: GetTopTracksPaginated :many
SELECT
t.id,
x.track_id AS id,
t.title,
t.musicbrainz_id,
t.release_id,
r.image,
COUNT(*) AS listen_count,
get_artists_for_track(t.id) AS artists
FROM listens l
JOIN tracks_with_title t ON l.track_id = t.id
x.listen_count,
get_artists_for_track(x.track_id) AS artists,
x.rank
FROM (
SELECT
track_id,
COUNT(*) AS listen_count,
RANK() OVER (ORDER BY COUNT(*) DESC) as rank
FROM listens
WHERE listened_at BETWEEN $1 AND $2
GROUP BY track_id
ORDER BY listen_count DESC
LIMIT $3 OFFSET $4
) x
JOIN tracks_with_title t ON x.track_id = t.id
JOIN releases r ON t.release_id = r.id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY t.id, t.title, t.musicbrainz_id, t.release_id, r.image
ORDER BY listen_count DESC, t.id
LIMIT $3 OFFSET $4;
ORDER BY x.listen_count DESC, x.track_id;
-- name: GetTopTracksByArtistPaginated :many
SELECT
t.id,
x.track_id AS id,
t.title,
t.musicbrainz_id,
t.release_id,
r.image,
COUNT(*) AS listen_count,
get_artists_for_track(t.id) AS artists
FROM listens l
JOIN tracks_with_title t ON l.track_id = t.id
x.listen_count,
get_artists_for_track(x.track_id) AS artists,
x.rank
FROM (
SELECT
l.track_id,
COUNT(*) AS listen_count,
RANK() OVER (ORDER BY COUNT(*) DESC) as rank
FROM listens l
JOIN artist_tracks at ON l.track_id = at.track_id
WHERE l.listened_at BETWEEN $1 AND $2
AND at.artist_id = $5
GROUP BY l.track_id
ORDER BY listen_count DESC
LIMIT $3 OFFSET $4
) x
JOIN tracks_with_title t ON x.track_id = t.id
JOIN releases r ON t.release_id = r.id
JOIN artist_tracks at ON at.track_id = t.id
WHERE l.listened_at BETWEEN $1 AND $2
AND at.artist_id = $5
GROUP BY t.id, t.title, t.musicbrainz_id, t.release_id, r.image
ORDER BY listen_count DESC, t.id
LIMIT $3 OFFSET $4;
ORDER BY x.listen_count DESC, x.track_id;
-- name: GetTopTracksInReleasePaginated :many
SELECT
t.id,
x.track_id AS id,
t.title,
t.musicbrainz_id,
t.release_id,
r.image,
COUNT(*) AS listen_count,
get_artists_for_track(t.id) AS artists
FROM listens l
JOIN tracks_with_title t ON l.track_id = t.id
x.listen_count,
get_artists_for_track(x.track_id) AS artists,
x.rank
FROM (
SELECT
l.track_id,
COUNT(*) AS listen_count,
RANK() OVER (ORDER BY COUNT(*) DESC) as rank
FROM listens l
JOIN tracks t ON l.track_id = t.id
WHERE l.listened_at BETWEEN $1 AND $2
AND t.release_id = $5
GROUP BY l.track_id
ORDER BY listen_count DESC
LIMIT $3 OFFSET $4
) x
JOIN tracks_with_title t ON x.track_id = t.id
JOIN releases r ON t.release_id = r.id
WHERE l.listened_at BETWEEN $1 AND $2
AND t.release_id = $5
GROUP BY t.id, t.title, t.musicbrainz_id, t.release_id, r.image
ORDER BY listen_count DESC, t.id
LIMIT $3 OFFSET $4;
ORDER BY x.listen_count DESC, x.track_id;
-- name: GetTrackAllTimeRank :one
SELECT
id,
rank
FROM (
SELECT
x.id,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
t.id,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks_with_title t ON l.track_id = t.id
GROUP BY t.id) x
) y
WHERE id = $1;
-- name: CountTopTracks :one
SELECT COUNT(DISTINCT l.track_id) AS total_count

View file

@@ -28,7 +28,7 @@ import { Card, CardGrid } from '@astrojs/starlight/components';
Koito can be connected to any music server or client that allows for custom ListenBrainz URLs.
</Card>
<Card title="Scrobbler relay" icon="rocket">
Automatically relay listens submitted to your Koito instance to other ListenBrainz compatble servers.
Automatically relay listens submitted to your Koito instance to other ListenBrainz compatible servers.
</Card>
<Card title="Automatic data fetching" icon="download">
Koito automatically fetches data from MusicBrainz and images from Deezer and Cover Art Archive to complement what is provided by your music server.

View file

@@ -64,6 +64,8 @@ If the environment variable is defined without **and** with the suffix at the sa
##### KOITO_CONFIG_DIR
- Default: `/etc/koito`
- Description: The location where import folders and image caches are stored.
##### KOITO_FORCE_TZ
- Description: A canonical [IANA time zone database name](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) that Koito will use to serve all clients. Overrides any time zone requested via a `tz` cookie or `tz` query parameter. Koito will fail to start if this value is invalid.
##### KOITO_DISABLE_DEEZER
- Default: `false`
- Description: Disables Deezer as a source for finding artist and album images.
@@ -78,6 +80,13 @@ If the environment variable is defined without **and** with the suffix at the sa
##### KOITO_SUBSONIC_PARAMS
- Required: `true` if KOITO_SUBSONIC_URL is set
- Description: The `u`, `t`, and `s` authentication parameters to use for authenticated requests to your Subsonic server, in the format `u=XXX&t=XXX&s=XXX`. An easy way to find them is to open the network tab in your browser's developer tools and copy them from a request.
:::caution
If Koito is unable to validate your Subsonic configuration, it will fail to start. If you notice your container isn't running after
changing these parameters, check the logs!
:::
##### KOITO_LASTFM_API_KEY
- Required: `false`
- Description: Your LastFM API key, which will be used for fetching images if provided. You can get an API key [here](https://www.last.fm/api/authentication).
##### KOITO_SKIP_IMPORT
- Default: `false`
- Description: Skips running the importer on startup.

View file

@@ -96,6 +96,10 @@ func Run(
defer store.Close(ctx)
l.Info().Msg("Engine: Database connection established")
if cfg.ForceTZ() != nil {
l.Debug().Msgf("Engine: Forcing the use of timezone '%s'", cfg.ForceTZ().String())
}
l.Debug().Msg("Engine: Initializing MusicBrainz client")
var mbzC mbz.MusicBrainzCaller
if !cfg.MusicBrainzDisabled() {
@@ -138,6 +142,7 @@ func Run(
EnableCAA: !cfg.CoverArtArchiveDisabled(),
EnableDeezer: !cfg.DeezerDisabled(),
EnableSubsonic: cfg.SubsonicEnabled(),
EnableLastFM: cfg.LastFMApiKey() != "",
})
l.Info().Msg("Engine: Image sources initialized")
@@ -211,6 +216,8 @@ func Run(
}
}()
l.Info().Msg("Engine: Beginning startup tasks...")
l.Debug().Msg("Engine: Checking import configuration")
if !cfg.SkipImport() {
go func() {
@@ -218,18 +225,14 @@ func Run(
}()
}
// l.Info().Msg("Creating test export file")
// go func() {
// err := export.ExportData(ctx, "koito", store)
// if err != nil {
// l.Err(err).Msg("Failed to generate export file")
// }
// }()
l.Info().Msg("Engine: Pruning orphaned images")
go catalog.PruneOrphanedImages(logger.NewContext(l), store)
l.Info().Msg("Engine: Running duration backfill task")
go catalog.BackfillTrackDurationsFromMusicBrainz(ctx, store, mbzC)
l.Info().Msg("Engine: Attempting to fetch missing artist images")
go catalog.FetchMissingArtistImages(ctx, store)
l.Info().Msg("Engine: Attempting to fetch missing album images")
go catalog.FetchMissingAlbumImages(ctx, store)
l.Info().Msg("Engine: Initialization finished")
quit := make(chan os.Signal, 1)

View file

@@ -106,7 +106,7 @@ func GetListenActivityHandler(store db.DB) func(w http.ResponseWriter, r *http.R
return
}
activity = fillMissingActivity(activity, opts)
activity = processActivity(activity, opts)
l.Debug().Msg("GetListenActivityHandler: Successfully retrieved listen activity")
utils.WriteJSON(w, http.StatusOK, activity)
@@ -114,34 +114,55 @@ func GetListenActivityHandler(store db.DB) func(w http.ResponseWriter, r *http.R
}
// ngl i hate this
func fillMissingActivity(
func processActivity(
items []db.ListenActivityItem,
opts db.ListenActivityOpts,
) []db.ListenActivityItem {
from, to := db.ListenActivityOptsToTimes(opts)
existing := make(map[string]int64, len(items))
buckets := make(map[string]int64)
for _, item := range items {
existing[item.Start.Format("2006-01-02")] = item.Listens
bucketStart := normalizeToStep(item.Start, opts.Step)
key := bucketStart.Format("2006-01-02")
buckets[key] += item.Listens
}
var result []db.ListenActivityItem
for t := from; t.Before(to); t = addStep(t, opts.Step) {
listens := int64(0)
if v, ok := existing[t.Format("2006-01-02")]; ok {
listens = v
}
for t := normalizeToStep(from, opts.Step); t.Before(to); t = addStep(t, opts.Step) {
key := t.Format("2006-01-02")
result = append(result, db.ListenActivityItem{
Start: t,
Listens: int64(listens),
Listens: buckets[key],
})
}
return result
}
func normalizeToStep(t time.Time, step db.StepInterval) time.Time {
switch step {
case db.StepDay:
return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, t.Location())
case db.StepWeek:
weekday := int(t.Weekday())
if weekday == 0 {
weekday = 7
}
start := t.AddDate(0, 0, -(weekday - 1))
return time.Date(start.Year(), start.Month(), start.Day(), 0, 0, 0, 0, t.Location())
case db.StepMonth:
return time.Date(t.Year(), t.Month(), 1, 0, 0, 0, 0, t.Location())
default:
return t
}
}
func addStep(t time.Time, step db.StepInterval) time.Time {
switch step {
case db.StepDay:

View file

@@ -6,7 +6,9 @@ import (
"strconv"
"strings"
"time"
_ "time/tzdata"
"github.com/gabehf/koito/internal/cfg"
"github.com/gabehf/koito/internal/db"
"github.com/gabehf/koito/internal/logger"
)
@@ -107,14 +109,143 @@ func TimeframeFromRequest(r *http.Request) db.Timeframe {
func parseTZ(r *http.Request) *time.Location {
// this map is obviously AI.
// i manually referenced as many links as I could and couldn't find any
// incorrect entries here so hopefully it is all correct.
overrides := map[string]string{
// --- North America ---
"America/Indianapolis": "America/Indiana/Indianapolis",
"America/Knoxville": "America/Indiana/Knoxville",
"America/Louisville": "America/Kentucky/Louisville",
"America/Montreal": "America/Toronto",
"America/Shiprock": "America/Denver",
"America/Fort_Wayne": "America/Indiana/Indianapolis",
"America/Virgin": "America/Port_of_Spain",
"America/Santa_Isabel": "America/Tijuana",
"America/Ensenada": "America/Tijuana",
"America/Rosario": "America/Argentina/Cordoba",
"America/Jujuy": "America/Argentina/Jujuy",
"America/Mendoza": "America/Argentina/Mendoza",
"America/Catamarca": "America/Argentina/Catamarca",
"America/Cordoba": "America/Argentina/Cordoba",
"America/Buenos_Aires": "America/Argentina/Buenos_Aires",
"America/Coral_Harbour": "America/Atikokan",
"America/Atka": "America/Adak",
"US/Alaska": "America/Anchorage",
"US/Aleutian": "America/Adak",
"US/Arizona": "America/Phoenix",
"US/Central": "America/Chicago",
"US/Eastern": "America/New_York",
"US/East-Indiana": "America/Indiana/Indianapolis",
"US/Hawaii": "Pacific/Honolulu",
"US/Indiana-Starke": "America/Indiana/Knoxville",
"US/Michigan": "America/Detroit",
"US/Mountain": "America/Denver",
"US/Pacific": "America/Los_Angeles",
"US/Samoa": "Pacific/Pago_Pago",
"Canada/Atlantic": "America/Halifax",
"Canada/Central": "America/Winnipeg",
"Canada/Eastern": "America/Toronto",
"Canada/Mountain": "America/Edmonton",
"Canada/Newfoundland": "America/St_Johns",
"Canada/Pacific": "America/Vancouver",
// --- Asia ---
"Asia/Calcutta": "Asia/Kolkata",
"Asia/Saigon": "Asia/Ho_Chi_Minh",
"Asia/Katmandu": "Asia/Kathmandu",
"Asia/Rangoon": "Asia/Yangon",
"Asia/Ulan_Bator": "Asia/Ulaanbaatar",
"Asia/Macao": "Asia/Macau",
"Asia/Tel_Aviv": "Asia/Jerusalem",
"Asia/Ashkhabad": "Asia/Ashgabat",
"Asia/Chungking": "Asia/Chongqing",
"Asia/Dacca": "Asia/Dhaka",
"Asia/Istanbul": "Europe/Istanbul",
"Asia/Kashgar": "Asia/Urumqi",
"Asia/Thimbu": "Asia/Thimphu",
"Asia/Ujung_Pandang": "Asia/Makassar",
"ROC": "Asia/Taipei",
"Iran": "Asia/Tehran",
"Israel": "Asia/Jerusalem",
"Japan": "Asia/Tokyo",
"Singapore": "Asia/Singapore",
"Hongkong": "Asia/Hong_Kong",
// --- Europe ---
"Europe/Kiev": "Europe/Kyiv",
"Europe/Belfast": "Europe/London",
"Europe/Tiraspol": "Europe/Chisinau",
"Europe/Nicosia": "Asia/Nicosia",
"Europe/Moscow": "Europe/Moscow",
"W-SU": "Europe/Moscow",
"GB": "Europe/London",
"GB-Eire": "Europe/London",
"Eire": "Europe/Dublin",
"Poland": "Europe/Warsaw",
"Portugal": "Europe/Lisbon",
"Turkey": "Europe/Istanbul",
// --- Australia / Pacific ---
"Australia/ACT": "Australia/Sydney",
"Australia/Canberra": "Australia/Sydney",
"Australia/LHI": "Australia/Lord_Howe",
"Australia/North": "Australia/Darwin",
"Australia/NSW": "Australia/Sydney",
"Australia/Queensland": "Australia/Brisbane",
"Australia/South": "Australia/Adelaide",
"Australia/Tasmania": "Australia/Hobart",
"Australia/Victoria": "Australia/Melbourne",
"Australia/West": "Australia/Perth",
"Australia/Yancowinna": "Australia/Broken_Hill",
"Pacific/Samoa": "Pacific/Pago_Pago",
"Pacific/Yap": "Pacific/Chuuk",
"Pacific/Truk": "Pacific/Chuuk",
"Pacific/Ponape": "Pacific/Pohnpei",
"NZ": "Pacific/Auckland",
"NZ-CHAT": "Pacific/Chatham",
// --- Africa ---
"Africa/Asmera": "Africa/Asmara",
"Africa/Timbuktu": "Africa/Bamako",
"Egypt": "Africa/Cairo",
"Libya": "Africa/Tripoli",
// --- Atlantic ---
"Atlantic/Faeroe": "Atlantic/Faroe",
"Atlantic/Jan_Mayen": "Europe/Oslo",
"Iceland": "Atlantic/Reykjavik",
// --- Etc / Misc ---
"UTC": "UTC",
"Etc/UTC": "UTC",
"Etc/GMT": "UTC",
"GMT": "UTC",
"Zulu": "UTC",
"Universal": "UTC",
}
if cfg.ForceTZ() != nil {
return cfg.ForceTZ()
}
if tz := r.URL.Query().Get("tz"); tz != "" {
if fixedTz, exists := overrides[tz]; exists {
tz = fixedTz
}
if loc, err := time.LoadLocation(tz); err == nil {
return loc
}
}
if c, err := r.Cookie("tz"); err == nil {
if loc, err := time.LoadLocation(c.Value); err == nil {
var tz string
if fixedTz, exists := overrides[c.Value]; exists {
tz = fixedTz
} else {
tz = c.Value
}
if loc, err := time.LoadLocation(tz); err == nil {
return loc
}
}
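The override map above lets `parseTZ` accept deprecated or alias zone names that may map to renamed zones in current tz databases. The lookup-then-load flow condenses to a small sketch (illustrative subset of the table; the `_ "time/tzdata"` import embeds the zone database, as the diff adds):

```go
package main

import (
	"fmt"
	"time"
	_ "time/tzdata" // embed the tz database so lookups work without OS tzdata
)

// canonicalTZ resolves a (possibly deprecated) zone name through an
// override table before calling time.LoadLocation, mirroring how
// parseTZ handles legacy names like "Asia/Calcutta".
func canonicalTZ(name string, overrides map[string]string) (*time.Location, error) {
	if fixed, ok := overrides[name]; ok {
		name = fixed // rewrite to the canonical IANA name
	}
	return time.LoadLocation(name)
}

func main() {
	overrides := map[string]string{
		"Asia/Calcutta": "Asia/Kolkata",
		"US/Eastern":    "America/New_York",
	}
	loc, err := canonicalTZ("Asia/Calcutta", overrides)
	if err != nil {
		panic(err)
	}
	fmt.Println(loc) // Asia/Kolkata
}
```

The override runs before `LoadLocation` in both the query-parameter and cookie branches, so clients sending legacy names get the renamed zone rather than whatever stale link their tz database happens to ship.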

View file

@@ -90,6 +90,11 @@ func LbzSubmitListenHandler(store db.DB, mbzc mbz.MusicBrainzCaller) func(w http
utils.WriteError(w, "failed to read request body", http.StatusBadRequest)
return
}
if cfg.LbzRelayEnabled() {
go doLbzRelay(requestBytes, l)
}
if err := json.NewDecoder(bytes.NewBuffer(requestBytes)).Decode(&req); err != nil {
l.Err(err).Msg("LbzSubmitListenHandler: Failed to decode request")
utils.WriteError(w, "failed to decode request", http.StatusBadRequest)
@@ -234,10 +239,6 @@ func LbzSubmitListenHandler(store db.DB, mbzc mbz.MusicBrainzCaller) func(w http
w.WriteHeader(http.StatusOK)
w.Header().Set("Content-Type", "application/json")
w.Write([]byte("{\"status\": \"ok\"}"))
if cfg.LbzRelayEnabled() {
go doLbzRelay(requestBytes, l)
}
}
}

View file

@@ -9,6 +9,7 @@ import (
"github.com/gabehf/koito/internal/catalog"
"github.com/gabehf/koito/internal/cfg"
"github.com/gabehf/koito/internal/db"
"github.com/gabehf/koito/internal/images"
"github.com/gabehf/koito/internal/logger"
"github.com/gabehf/koito/internal/utils"
"github.com/google/uuid"
@@ -75,7 +76,7 @@ func ReplaceImageHandler(store db.DB) http.HandlerFunc {
fileUrl := r.FormValue("image_url")
if fileUrl != "" {
l.Debug().Msg("ReplaceImageHandler: Image identified as remote file")
err = catalog.ValidateImageURL(fileUrl)
err = images.ValidateImageURL(fileUrl)
if err != nil {
l.Debug().AnErr("error", err).Msg("ReplaceImageHandler: Invalid image URL")
utils.WriteError(w, "url is invalid or not an image file", http.StatusBadRequest)

View file

@@ -264,6 +264,34 @@ func TestImportListenBrainz_MbzDisabled(t *testing.T) {
truncateTestData(t)
}
func TestImportListenBrainz_MBIDMapping(t *testing.T) {
src := path.Join("..", "test_assets", "listenbrainz_shoko1_123456789.zip")
destDir := filepath.Join(cfg.ConfigDir(), "import")
dest := filepath.Join(destDir, "listenbrainz_shoko1_123456789.zip")
// not going to make the dest dir because engine should make it already
input, err := os.ReadFile(src)
require.NoError(t, err)
require.NoError(t, os.WriteFile(dest, input, os.ModePerm))
engine.RunImporter(logger.Get(), store, &mbz.MbzErrorCaller{})
album, err := store.GetAlbum(context.Background(), db.GetAlbumOpts{MusicBrainzID: uuid.MustParse("177ebc28-0115-3897-8eb3-ebf74ce23790")})
require.NoError(t, err)
assert.Equal(t, "Zombie", album.Title)
artist, err := store.GetArtist(context.Background(), db.GetArtistOpts{MusicBrainzID: uuid.MustParse("c98d40fd-f6cf-4b26-883e-eaa515ee2851")})
require.NoError(t, err)
assert.Equal(t, "The Cranberries", artist.Name)
track, err := store.GetTrack(context.Background(), db.GetTrackOpts{MusicBrainzID: uuid.MustParse("3bbeb4e3-ab6d-460d-bfc5-de49e4251061")})
require.NoError(t, err)
assert.Equal(t, "Zombie", track.Title)
truncateTestData(t)
}
func TestImportKoito(t *testing.T) {
src := path.Join("..", "test_assets", "koito_export_test.json")
@@ -276,6 +304,7 @@ func TestImportKoito(t *testing.T) {
giriReleaseMBID := uuid.MustParse("ac1f8da0-21d7-426e-83b0-befff06f0871")
suzukiMBID := uuid.MustParse("30f851bb-dba3-4e9b-811c-5f27f595c86a")
nijinoTrackMBID := uuid.MustParse("a4f26836-3894-46c1-acac-227808308687")
lp3MBID := uuid.MustParse("d0ec30bd-7cdc-417c-979d-5a0631b8a161")
input, err := os.ReadFile(src)
require.NoError(t, err)
@@ -312,6 +341,12 @@ func TestImportKoito(t *testing.T) {
aliases, err := store.GetAllAlbumAliases(ctx, album.ID)
require.NoError(t, err)
assert.Contains(t, utils.FlattenAliases(aliases), "Nijinoiroyo Azayakadeare (NELKE ver.)")
// ensure album associations are saved
album, err = store.GetAlbum(ctx, db.GetAlbumOpts{MusicBrainzID: lp3MBID})
require.NoError(t, err)
assert.Contains(t, utils.FlattenSimpleArtistNames(album.Artists), "Elizabeth Powell")
assert.Contains(t, utils.FlattenSimpleArtistNames(album.Artists), "Rachel Goswell")
assert.Contains(t, utils.FlattenSimpleArtistNames(album.Artists), "American Football")
// ensure all tracks are saved
track, err := store.GetTrack(ctx, db.GetTrackOpts{MusicBrainzID: nijinoTrackMBID})

View file

@@ -356,6 +356,51 @@ func TestDelete(t *testing.T) {
truncateTestData(t)
}
func TestLoginGate(t *testing.T) {
t.Run("Submit Listens", doSubmitListens)
req, err := http.NewRequest("DELETE", host()+"/apis/web/v1/artist?id=1", nil)
require.NoError(t, err)
req.Header.Add("Authorization", "Token "+apikey)
resp, err := http.DefaultClient.Do(req)
assert.NoError(t, err)
assert.Equal(t, 204, resp.StatusCode)
req, err = http.NewRequest("GET", host()+"/apis/web/v1/artist?id=3", nil)
require.NoError(t, err)
resp, err = http.DefaultClient.Do(req)
assert.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
var artist models.Artist
err = json.NewDecoder(resp.Body).Decode(&artist)
require.NoError(t, err)
assert.Equal(t, "ネクライトーキー", artist.Name)
cfg.SetLoginGate(true)
req, err = http.NewRequest("GET", host()+"/apis/web/v1/artist?id=3", nil)
require.NoError(t, err)
// req.Header.Add("Authorization", "Token "+apikey)
resp, err = http.DefaultClient.Do(req)
assert.NoError(t, err)
assert.Equal(t, 401, resp.StatusCode)
req, err = http.NewRequest("GET", host()+"/apis/web/v1/artist?id=3", nil)
require.NoError(t, err)
req.Header.Add("Authorization", "Token "+apikey)
resp, err = http.DefaultClient.Do(req)
assert.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
err = json.NewDecoder(resp.Body).Decode(&artist)
require.NoError(t, err)
assert.Equal(t, "ネクライトーキー", artist.Name)
cfg.SetLoginGate(false)
truncateTestData(t)
}
func TestAliasesAndSearch(t *testing.T) {
t.Run("Submit Listens", doSubmitListens)

View file

@@ -0,0 +1,166 @@
package middleware
import (
"context"
"errors"
"fmt"
"net/http"
"strings"
"time"
"github.com/gabehf/koito/internal/cfg"
"github.com/gabehf/koito/internal/db"
"github.com/gabehf/koito/internal/logger"
"github.com/gabehf/koito/internal/models"
"github.com/gabehf/koito/internal/utils"
"github.com/google/uuid"
)
type MiddlwareContextKey string
const (
UserContextKey MiddlwareContextKey = "user"
apikeyContextKey MiddlwareContextKey = "apikeyID"
)
type AuthMode int
const (
AuthModeSessionCookie AuthMode = iota
AuthModeAPIKey
AuthModeSessionOrAPIKey
AuthModeLoginGate
)
func Authenticate(store db.DB, mode AuthMode) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
l := logger.FromContext(ctx)
var user *models.User
var err error
switch mode {
case AuthModeSessionCookie:
user, err = validateSession(ctx, store, r)
case AuthModeAPIKey:
user, err = validateAPIKey(ctx, store, r)
case AuthModeSessionOrAPIKey:
user, err = validateSession(ctx, store, r)
if err != nil || user == nil {
user, err = validateAPIKey(ctx, store, r)
}
case AuthModeLoginGate:
if cfg.LoginGate() {
user, err = validateSession(ctx, store, r)
if err != nil || user == nil {
user, err = validateAPIKey(ctx, store, r)
}
} else {
next.ServeHTTP(w, r)
return
}
}
if err != nil {
l.Err(err).Msg("authentication failed")
utils.WriteError(w, "unauthorized", http.StatusUnauthorized)
return
}
if user == nil {
utils.WriteError(w, "unauthorized", http.StatusUnauthorized)
return
}
ctx = context.WithValue(ctx, UserContextKey, user)
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
})
}
}
func validateSession(ctx context.Context, store db.DB, r *http.Request) (*models.User, error) {
l := logger.FromContext(r.Context())
l.Debug().Msg("ValidateSession: Checking user authentication via session cookie")
cookie, err := r.Cookie("koito_session")
var sid uuid.UUID
if err == nil {
sid, err = uuid.Parse(cookie.Value)
if err != nil {
l.Err(err).Msg("ValidateSession: Could not parse UUID from session cookie")
return nil, errors.New("session cookie is invalid")
}
} else {
l.Debug().Msg("ValidateSession: No session cookie found")
return nil, errors.New("session cookie is missing")
}
l.Debug().Msg("ValidateSession: Retrieved login cookie from request")
u, err := store.GetUserBySession(r.Context(), sid)
if err != nil {
l.Err(fmt.Errorf("ValidateSession: %w", err)).Msg("Error accessing database")
return nil, errors.New("internal server error")
}
if u == nil {
l.Debug().Msg("ValidateSession: No user with session id found")
return nil, errors.New("no user with session id found")
}
l.Debug().Msgf("ValidateSession: Refreshing session for user '%s'", u.Username)
store.RefreshSession(r.Context(), sid, time.Now().Add(30*24*time.Hour))
l.Debug().Msgf("ValidateSession: Refreshed session for user '%s'", u.Username)
return u, nil
}
func validateAPIKey(ctx context.Context, store db.DB, r *http.Request) (*models.User, error) {
l := logger.FromContext(ctx)
l.Debug().Msg("ValidateApiKey: Checking user authentication via API key")
authH := r.Header.Get("Authorization")
var token string
if strings.HasPrefix(strings.ToLower(authH), "token ") {
token = strings.TrimSpace(authH[6:]) // strip "Token "
} else {
l.Error().Msg("ValidateApiKey: Authorization header must be formatted 'Token {token}'")
return nil, errors.New("authorization header is invalid")
}
u, err := store.GetUserByApiKey(ctx, token)
if err != nil {
l.Err(err).Msg("ValidateApiKey: Failed to get user from database using api key")
return nil, errors.New("internal server error")
}
if u == nil {
l.Debug().Msg("ValidateApiKey: API key does not exist")
return nil, errors.New("authorization token is invalid")
}
return u, nil
}
func GetUserFromContext(ctx context.Context) *models.User {
user, ok := ctx.Value(UserContextKey).(*models.User)
if !ok {
return nil
}
return user
}


@ -1,125 +0,0 @@
package middleware
import (
"context"
"fmt"
"net/http"
"strings"
"time"
"github.com/gabehf/koito/internal/db"
"github.com/gabehf/koito/internal/logger"
"github.com/gabehf/koito/internal/models"
"github.com/gabehf/koito/internal/utils"
"github.com/google/uuid"
)
type MiddlwareContextKey string
const (
UserContextKey MiddlwareContextKey = "user"
apikeyContextKey MiddlwareContextKey = "apikeyID"
)
func ValidateSession(store db.DB) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
l := logger.FromContext(r.Context())
l.Debug().Msgf("ValidateSession: Checking user authentication via session cookie")
cookie, err := r.Cookie("koito_session")
var sid uuid.UUID
if err == nil {
sid, err = uuid.Parse(cookie.Value)
if err != nil {
l.Err(err).Msg("ValidateSession: Could not parse UUID from session cookie")
utils.WriteError(w, "session cookie is invalid", http.StatusUnauthorized)
return
}
} else {
l.Debug().Msgf("ValidateSession: No session cookie found; attempting API key authentication")
utils.WriteError(w, "session cookie is missing", http.StatusUnauthorized)
return
}
l.Debug().Msg("ValidateSession: Retrieved login cookie from request")
u, err := store.GetUserBySession(r.Context(), sid)
if err != nil {
l.Err(fmt.Errorf("ValidateSession: %w", err)).Msg("Error accessing database")
utils.WriteError(w, "internal server error", http.StatusInternalServerError)
return
}
if u == nil {
l.Debug().Msg("ValidateSession: No user with session id found")
utils.WriteError(w, "unauthorized", http.StatusUnauthorized)
return
}
ctx := context.WithValue(r.Context(), UserContextKey, u)
r = r.WithContext(ctx)
l.Debug().Msgf("ValidateSession: Refreshing session for user '%s'", u.Username)
store.RefreshSession(r.Context(), sid, time.Now().Add(30*24*time.Hour))
l.Debug().Msgf("ValidateSession: Refreshed session for user '%s'", u.Username)
next.ServeHTTP(w, r)
})
}
}
func ValidateApiKey(store db.DB) func(next http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
l := logger.FromContext(ctx)
l.Debug().Msg("ValidateApiKey: Checking if user is already authenticated")
u := GetUserFromContext(ctx)
if u != nil {
l.Debug().Msg("ValidateApiKey: User is already authenticated; skipping API key authentication")
next.ServeHTTP(w, r)
return
}
authh := r.Header.Get("Authorization")
var token string
if strings.HasPrefix(strings.ToLower(authh), "token ") {
token = strings.TrimSpace(authh[6:]) // strip "Token "
} else {
l.Error().Msg("ValidateApiKey: Authorization header must be formatted 'Token {token}'")
utils.WriteError(w, "unauthorized", http.StatusUnauthorized)
return
}
u, err := store.GetUserByApiKey(ctx, token)
if err != nil {
l.Err(err).Msg("Failed to get user from database using api key")
utils.WriteError(w, "internal server error", http.StatusInternalServerError)
return
}
if u == nil {
l.Debug().Msg("Api key does not exist")
utils.WriteError(w, "unauthorized", http.StatusUnauthorized)
return
}
ctx = context.WithValue(r.Context(), UserContextKey, u)
r = r.WithContext(ctx)
next.ServeHTTP(w, r)
})
}
}
func GetUserFromContext(ctx context.Context) *models.User {
user, ok := ctx.Value(UserContextKey).(*models.User)
if !ok {
return nil
}
return user
}


@ -38,9 +38,7 @@ func bindRoutes(
r.Get("/config", handlers.GetCfgHandler())
r.Group(func(r chi.Router) {
if cfg.LoginGate() {
r.Use(middleware.ValidateSession(db))
}
r.Use(middleware.Authenticate(db, middleware.AuthModeLoginGate))
r.Get("/artist", handlers.GetArtistHandler(db))
r.Get("/artists", handlers.GetArtistsForItemHandler(db))
r.Get("/album", handlers.GetAlbumHandler(db))
@ -79,7 +77,7 @@ func bindRoutes(
})
r.Group(func(r chi.Router) {
r.Use(middleware.ValidateSession(db))
r.Use(middleware.Authenticate(db, middleware.AuthModeSessionOrAPIKey))
r.Get("/export", handlers.ExportHandler(db))
r.Post("/replace-image", handlers.ReplaceImageHandler(db))
r.Patch("/album", handlers.UpdateAlbumHandler(db))
@ -111,8 +109,10 @@ func bindRoutes(
AllowedHeaders: []string{"Content-Type", "Authorization"},
}))
r.With(middleware.ValidateApiKey(db)).Post("/submit-listens", handlers.LbzSubmitListenHandler(db, mbz))
r.With(middleware.ValidateApiKey(db)).Get("/validate-token", handlers.LbzValidateTokenHandler(db))
r.With(middleware.Authenticate(db, middleware.AuthModeAPIKey)).
Post("/submit-listens", handlers.LbzSubmitListenHandler(db, mbz))
r.With(middleware.Authenticate(db, middleware.AuthModeAPIKey)).
Get("/validate-token", handlers.LbzValidateTokenHandler(db))
})
// serve react client


@ -74,9 +74,6 @@ func matchTrackByMbzID(ctx context.Context, d db.DB, opts AssociateTrackOpts) (*
} else {
l.Warn().Msgf("Attempted to update track %s with MusicBrainz ID, but an existing ID was already found", track.Title)
}
if err != nil {
return nil, fmt.Errorf("matchTrackByMbzID: %w", err)
}
track.MbzID = &opts.TrackMbzID
return track, nil
}


@ -21,6 +21,7 @@ func BackfillTrackDurationsFromMusicBrainz(
var from int32 = 0
for {
l.Debug().Int32("ID", from).Msg("Fetching tracks to backfill from ID")
tracks, err := store.GetTracksWithNoDurationButHaveMbzID(ctx, from)
if err != nil {
return fmt.Errorf("BackfillTrackDurationsFromMusicBrainz: failed to fetch tracks for duration backfill: %w", err)


@ -0,0 +1,36 @@
package catalog_test
import (
"context"
"testing"
"github.com/gabehf/koito/internal/catalog"
"github.com/gabehf/koito/internal/mbz"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBackfillDuration(t *testing.T) {
setupTestDataWithMbzIDs(t)
ctx := context.Background()
mbzc := &mbz.MbzMockCaller{
Artists: mbzArtistData,
Releases: mbzReleaseData,
Tracks: mbzTrackData,
}
var err error
err = catalog.BackfillTrackDurationsFromMusicBrainz(context.Background(), store, &mbz.MbzErrorCaller{})
assert.NoError(t, err)
err = catalog.BackfillTrackDurationsFromMusicBrainz(ctx, store, mbzc)
assert.NoError(t, err)
count, err := store.Count(ctx, `
SELECT COUNT(*) FROM tracks_with_title WHERE title = $1 AND duration > 0
`, "Tokyo Calling")
require.NoError(t, err)
assert.Equal(t, 1, count, "track was not updated with duration")
}


@ -13,7 +13,9 @@ import (
"github.com/gabehf/koito/internal/cfg"
"github.com/gabehf/koito/internal/db"
"github.com/gabehf/koito/internal/images"
"github.com/gabehf/koito/internal/logger"
"github.com/gabehf/koito/internal/utils"
"github.com/google/uuid"
"github.com/h2non/bimg"
)
@ -78,30 +80,10 @@ func SourceImageDir() string {
}
}
// ValidateImageURL checks if the URL points to a valid image by performing a HEAD request.
func ValidateImageURL(url string) error {
resp, err := http.Head(url)
if err != nil {
return fmt.Errorf("ValidateImageURL: http.Head: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("ValidateImageURL: HEAD request failed, status code: %d", resp.StatusCode)
}
contentType := resp.Header.Get("Content-Type")
if !strings.HasPrefix(contentType, "image/") {
return fmt.Errorf("ValidateImageURL: URL does not point to an image, content type: %s", contentType)
}
return nil
}
// DownloadAndCacheImage downloads an image from the given URL, then calls CompressAndSaveImage.
func DownloadAndCacheImage(ctx context.Context, id uuid.UUID, url string, size ImageSize) error {
l := logger.FromContext(ctx)
err := ValidateImageURL(url)
err := images.ValidateImageURL(url)
if err != nil {
return fmt.Errorf("DownloadAndCacheImage: %w", err)
}
@ -285,3 +267,127 @@ func pruneDirImgs(ctx context.Context, store db.DB, path string, memo map[string
}
return count, nil
}
func FetchMissingArtistImages(ctx context.Context, store db.DB) error {
l := logger.FromContext(ctx)
l.Info().Msg("FetchMissingArtistImages: Starting backfill of missing artist images")
var from int32 = 0
for {
l.Debug().Int32("ID", from).Msg("Fetching artist images to backfill from ID")
artists, err := store.ArtistsWithoutImages(ctx, from)
if err != nil {
return fmt.Errorf("FetchMissingArtistImages: failed to fetch artists for image backfill: %w", err)
}
if len(artists) == 0 {
if from == 0 {
l.Info().Msg("FetchMissingArtistImages: No artists with missing images found")
} else {
l.Info().Msg("FetchMissingArtistImages: Finished fetching missing artist images")
}
return nil
}
for _, artist := range artists {
from = artist.ID
l.Debug().
Str("title", artist.Name).
Msg("FetchMissingArtistImages: Attempting to fetch missing artist image")
var aliases []string
if aliasrow, err := store.GetAllArtistAliases(ctx, artist.ID); err == nil {
aliases = utils.FlattenAliases(aliasrow)
} else {
aliases = []string{artist.Name}
}
var imgid uuid.UUID
imgUrl, imgErr := images.GetArtistImage(ctx, images.ArtistImageOpts{
Aliases: aliases,
})
if imgErr == nil && imgUrl != "" {
imgid = uuid.New()
err = store.UpdateArtist(ctx, db.UpdateArtistOpts{
ID: artist.ID,
Image: imgid,
ImageSrc: imgUrl,
})
if err != nil {
l.Err(err).
Str("title", artist.Name).
Msg("FetchMissingArtistImages: Failed to update artist with image in database")
continue
}
l.Info().
Str("name", artist.Name).
Msg("FetchMissingArtistImages: Successfully fetched missing artist image")
} else {
l.Err(imgErr).
Str("name", artist.Name).
Msg("FetchMissingArtistImages: Failed to fetch artist image")
}
}
}
}
func FetchMissingAlbumImages(ctx context.Context, store db.DB) error {
l := logger.FromContext(ctx)
l.Info().Msg("FetchMissingAlbumImages: Starting backfill of missing album images")
var from int32 = 0
for {
l.Debug().Int32("ID", from).Msg("Fetching album images to backfill from ID")
albums, err := store.AlbumsWithoutImages(ctx, from)
if err != nil {
return fmt.Errorf("FetchMissingAlbumImages: failed to fetch albums for image backfill: %w", err)
}
if len(albums) == 0 {
if from == 0 {
l.Info().Msg("FetchMissingAlbumImages: No albums with missing images found")
} else {
l.Info().Msg("FetchMissingAlbumImages: Finished fetching missing album images")
}
return nil
}
for _, album := range albums {
from = album.ID
l.Debug().
Str("title", album.Title).
Msg("FetchMissingAlbumImages: Attempting to fetch missing album image")
var imgid uuid.UUID
imgUrl, imgErr := images.GetAlbumImage(ctx, images.AlbumImageOpts{
Artists: utils.FlattenSimpleArtistNames(album.Artists),
Album: album.Title,
ReleaseMbzID: album.MbzID,
})
if imgErr == nil && imgUrl != "" {
imgid = uuid.New()
err = store.UpdateAlbum(ctx, db.UpdateAlbumOpts{
ID: album.ID,
Image: imgid,
ImageSrc: imgUrl,
})
if err != nil {
l.Err(err).
Str("title", album.Title).
Msg("FetchMissingAlbumImages: Failed to update album with image in database")
continue
}
l.Info().
Str("name", album.Title).
Msg("FetchMissingAlbumImages: Successfully fetched missing album image")
} else {
l.Err(imgErr).
Str("name", album.Title).
Msg("FetchMissingAlbumImages: Failed to fetch album image")
}
}
}
}


@ -38,6 +38,7 @@ const (
DISABLE_MUSICBRAINZ_ENV = "KOITO_DISABLE_MUSICBRAINZ"
SUBSONIC_URL_ENV = "KOITO_SUBSONIC_URL"
SUBSONIC_PARAMS_ENV = "KOITO_SUBSONIC_PARAMS"
LASTFM_API_KEY_ENV = "KOITO_LASTFM_API_KEY"
SKIP_IMPORT_ENV = "KOITO_SKIP_IMPORT"
ALLOWED_HOSTS_ENV = "KOITO_ALLOWED_HOSTS"
CORS_ORIGINS_ENV = "KOITO_CORS_ALLOWED_ORIGINS"
@ -48,6 +49,7 @@ const (
FETCH_IMAGES_DURING_IMPORT_ENV = "KOITO_FETCH_IMAGES_DURING_IMPORT"
ARTIST_SEPARATORS_ENV = "KOITO_ARTIST_SEPARATORS_REGEX"
LOGIN_GATE_ENV = "KOITO_LOGIN_GATE"
FORCE_TZ = "KOITO_FORCE_TZ"
)
type config struct {
@ -72,6 +74,7 @@ type config struct {
disableMusicBrainz bool
subsonicUrl string
subsonicParams string
lastfmApiKey string
subsonicEnabled bool
skipImport bool
fetchImageDuringImport bool
@ -85,6 +88,7 @@ type config struct {
importAfter time.Time
artistSeparators []*regexp.Regexp
loginGate bool
forceTZ *time.Location
}
var (
@ -165,6 +169,7 @@ func loadConfig(getenv func(string) string, version string) (*config, error) {
if cfg.subsonicEnabled && (cfg.subsonicUrl == "" || cfg.subsonicParams == "") {
return nil, fmt.Errorf("loadConfig: invalid configuration: both %s and %s must be set in order to use subsonic image fetching", SUBSONIC_URL_ENV, SUBSONIC_PARAMS_ENV)
}
cfg.lastfmApiKey = getenv(LASTFM_API_KEY_ENV)
cfg.skipImport = parseBool(getenv(SKIP_IMPORT_ENV))
cfg.userAgent = fmt.Sprintf("Koito %s (contact@koito.io)", version)
@ -210,6 +215,13 @@ func loadConfig(getenv func(string) string, version string) (*config, error) {
cfg.loginGate = true
}
if getenv(FORCE_TZ) != "" {
cfg.forceTZ, err = time.LoadLocation(getenv(FORCE_TZ))
if err != nil {
return nil, fmt.Errorf("loadConfig: forced timezone '%s' is not a valid timezone: %w", getenv(FORCE_TZ), err)
}
}
switch strings.ToLower(getenv(LOG_LEVEL_ENV)) {
case "debug":
cfg.logLevel = 0
@ -232,192 +244,3 @@ func parseBool(s string) bool {
return false
}
}
// Global accessors for configuration values
func UserAgent() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.userAgent
}
func ListenAddr() string {
lock.RLock()
defer lock.RUnlock()
return fmt.Sprintf("%s:%d", globalConfig.bindAddr, globalConfig.listenPort)
}
func ConfigDir() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.configDir
}
func DatabaseUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.databaseUrl
}
func MusicBrainzUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.musicBrainzUrl
}
func MusicBrainzRateLimit() int {
lock.RLock()
defer lock.RUnlock()
return globalConfig.musicBrainzRateLimit
}
func LogLevel() int {
lock.RLock()
defer lock.RUnlock()
return globalConfig.logLevel
}
func StructuredLogging() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.structuredLogging
}
func LbzRelayEnabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lbzRelayEnabled
}
func LbzRelayUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lbzRelayUrl
}
func LbzRelayToken() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lbzRelayToken
}
func DefaultPassword() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.defaultPw
}
func DefaultUsername() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.defaultUsername
}
func DefaultTheme() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.defaultTheme
}
func FullImageCacheEnabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.enableFullImageCache
}
func DeezerDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableDeezer
}
func CoverArtArchiveDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableCAA
}
func MusicBrainzDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableMusicBrainz
}
func SubsonicEnabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.subsonicEnabled
}
func SubsonicUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.subsonicUrl
}
func SubsonicParams() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.subsonicParams
}
func SkipImport() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.skipImport
}
func AllowedHosts() []string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.allowedHosts
}
func AllowAllHosts() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.allowAllHosts
}
func AllowedOrigins() []string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.allowedOrigins
}
func RateLimitDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableRateLimit
}
func ThrottleImportMs() int {
lock.RLock()
defer lock.RUnlock()
return globalConfig.importThrottleMs
}
// returns the before, after times, in that order
func ImportWindow() (time.Time, time.Time) {
lock.RLock()
defer lock.RUnlock()
return globalConfig.importBefore, globalConfig.importAfter
}
func FetchImagesDuringImport() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.fetchImageDuringImport
}
func ArtistSeparators() []*regexp.Regexp {
lock.RLock()
defer lock.RUnlock()
return globalConfig.artistSeparators
}
func LoginGate() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.loginGate
}

internal/cfg/getters.go (new file, 206 lines)

@ -0,0 +1,206 @@
package cfg
import (
"fmt"
"regexp"
"time"
)
func UserAgent() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.userAgent
}
func ListenAddr() string {
lock.RLock()
defer lock.RUnlock()
return fmt.Sprintf("%s:%d", globalConfig.bindAddr, globalConfig.listenPort)
}
func ConfigDir() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.configDir
}
func DatabaseUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.databaseUrl
}
func MusicBrainzUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.musicBrainzUrl
}
func MusicBrainzRateLimit() int {
lock.RLock()
defer lock.RUnlock()
return globalConfig.musicBrainzRateLimit
}
func LogLevel() int {
lock.RLock()
defer lock.RUnlock()
return globalConfig.logLevel
}
func StructuredLogging() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.structuredLogging
}
func LbzRelayEnabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lbzRelayEnabled
}
func LbzRelayUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lbzRelayUrl
}
func LbzRelayToken() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lbzRelayToken
}
func DefaultPassword() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.defaultPw
}
func DefaultUsername() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.defaultUsername
}
func DefaultTheme() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.defaultTheme
}
func FullImageCacheEnabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.enableFullImageCache
}
func DeezerDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableDeezer
}
func CoverArtArchiveDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableCAA
}
func MusicBrainzDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableMusicBrainz
}
func SubsonicEnabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.subsonicEnabled
}
func SubsonicUrl() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.subsonicUrl
}
func SubsonicParams() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.subsonicParams
}
func LastFMApiKey() string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.lastfmApiKey
}
func SkipImport() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.skipImport
}
func AllowedHosts() []string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.allowedHosts
}
func AllowAllHosts() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.allowAllHosts
}
func AllowedOrigins() []string {
lock.RLock()
defer lock.RUnlock()
return globalConfig.allowedOrigins
}
func RateLimitDisabled() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.disableRateLimit
}
func ThrottleImportMs() int {
lock.RLock()
defer lock.RUnlock()
return globalConfig.importThrottleMs
}
// ImportWindow returns the before and after times, in that order.
func ImportWindow() (time.Time, time.Time) {
lock.RLock()
defer lock.RUnlock()
return globalConfig.importBefore, globalConfig.importAfter
}
func FetchImagesDuringImport() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.fetchImageDuringImport
}
func ArtistSeparators() []*regexp.Regexp {
lock.RLock()
defer lock.RUnlock()
return globalConfig.artistSeparators
}
func LoginGate() bool {
lock.RLock()
defer lock.RUnlock()
return globalConfig.loginGate
}
func ForceTZ() *time.Location {
lock.RLock()
defer lock.RUnlock()
return globalConfig.forceTZ
}

internal/cfg/setters.go (new file, 7 lines)

@ -0,0 +1,7 @@
package cfg
func SetLoginGate(val bool) {
lock.Lock()
defer lock.Unlock()
globalConfig.loginGate = val
}
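The split into `getters.go`/`setters.go` keeps a single pattern throughout: one package-level config struct guarded by an RWMutex, read-locked getters, write-locked setters. A self-contained sketch of that pattern (names mirror the cfg package, but this is an illustration, not the repo's code):

```go
package main

import (
	"fmt"
	"sync"
)

// One package-level struct guarded by an RWMutex, as in the cfg package.
type config struct{ loginGate bool }

var (
	lock         sync.RWMutex
	globalConfig = &config{}
)

// LoginGate takes the read lock, so many readers can proceed concurrently.
func LoginGate() bool {
	lock.RLock()
	defer lock.RUnlock()
	return globalConfig.loginGate
}

// SetLoginGate takes the write lock, excluding readers while it mutates.
func SetLoginGate(val bool) {
	lock.Lock()
	defer lock.Unlock()
	globalConfig.loginGate = val
}

func main() {
	SetLoginGate(true)
	fmt.Println(LoginGate()) // true
}
```

The RWMutex matters here because config reads happen on every request while writes (like the test-only `SetLoginGate`) are rare.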


@ -19,9 +19,9 @@ type DB interface {
GetTracksWithNoDurationButHaveMbzID(ctx context.Context, from int32) ([]*models.Track, error)
GetArtistsForAlbum(ctx context.Context, id int32) ([]*models.Artist, error)
GetArtistsForTrack(ctx context.Context, id int32) ([]*models.Artist, error)
GetTopTracksPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[*models.Track], error)
GetTopArtistsPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[*models.Artist], error)
GetTopAlbumsPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[*models.Album], error)
GetTopTracksPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[RankedItem[*models.Track]], error)
GetTopArtistsPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[RankedItem[*models.Artist]], error)
GetTopAlbumsPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[RankedItem[*models.Album]], error)
GetListensPaginated(ctx context.Context, opts GetItemsOpts) (*PaginatedResponse[*models.Listen], error)
GetListenActivity(ctx context.Context, opts ListenActivityOpts) ([]ListenActivityItem, error)
GetAllArtistAliases(ctx context.Context, id int32) ([]models.Alias, error)
@ -88,6 +88,7 @@ type DB interface {
// in seconds
CountTimeListenedToItem(ctx context.Context, opts TimeListenedOpts) (int64, error)
CountUsers(ctx context.Context) (int64, error)
// Search
SearchArtists(ctx context.Context, q string) ([]*models.Artist, error)
@ -105,6 +106,7 @@ type DB interface {
ImageHasAssociation(ctx context.Context, image uuid.UUID) (bool, error)
GetImageSource(ctx context.Context, image uuid.UUID) (string, error)
AlbumsWithoutImages(ctx context.Context, from int32) ([]*models.Album, error)
ArtistsWithoutImages(ctx context.Context, from int32) ([]*models.Artist, error)
GetExportPage(ctx context.Context, opts GetExportPageOpts) ([]*ExportItem, error)
Ping(ctx context.Context) error
Close(ctx context.Context)


@ -57,11 +57,11 @@ const (
// and end will be 23:59:59 on Saturday at the end of the current week.
// If opts.Year (or opts.Year + opts.Month) is provided, start and end will simply be the start and end times of that year/month.
func ListenActivityOptsToTimes(opts ListenActivityOpts) (start, end time.Time) {
now := time.Now()
loc := opts.Timezone
if loc == nil {
loc, _ = time.LoadLocation("UTC")
}
now := time.Now().In(loc)
// If Year (and optionally Month) are specified, use calendar boundaries
if opts.Year != 0 {
@ -91,7 +91,9 @@ func ListenActivityOptsToTimes(opts ListenActivityOpts) (start, end time.Time) {
// Align to most recent Sunday
weekday := int(now.Weekday()) // Sunday = 0
startOfThisWeek := time.Date(now.Year(), now.Month(), now.Day()-weekday, 0, 0, 0, 0, loc)
start = startOfThisWeek.AddDate(0, 0, -7*opts.Range)
// need to subtract 1 from range for week because we are going back from the beginning of this
// week, so we sort of already went back a week
start = startOfThisWeek.AddDate(0, 0, -7*(opts.Range-1))
end = startOfThisWeek.AddDate(0, 0, 7).Add(-time.Nanosecond)
case StepMonth:


@ -23,32 +23,13 @@ func (d *Psql) GetAlbum(ctx context.Context, opts db.GetAlbumOpts) (*models.Albu
var err error
var ret = new(models.Album)
if opts.ID != 0 {
l.Debug().Msgf("Fetching album from DB with id %d", opts.ID)
row, err := d.q.GetRelease(ctx, opts.ID)
if err != nil {
return nil, fmt.Errorf("GetAlbum: %w", err)
}
ret.ID = row.ID
ret.MbzID = row.MusicBrainzID
ret.Title = row.Title
ret.Image = row.Image
ret.VariousArtists = row.VariousArtists
err = json.Unmarshal(row.Artists, &ret.Artists)
if err != nil {
return nil, fmt.Errorf("GetAlbum: json.Unmarshal: %w", err)
}
} else if opts.MusicBrainzID != uuid.Nil {
if opts.MusicBrainzID != uuid.Nil {
l.Debug().Msgf("Fetching album from DB with MusicBrainz Release ID %s", opts.MusicBrainzID)
row, err := d.q.GetReleaseByMbzID(ctx, &opts.MusicBrainzID)
if err != nil {
return nil, fmt.Errorf("GetAlbum: %w", err)
}
ret.ID = row.ID
ret.MbzID = row.MusicBrainzID
ret.Title = row.Title
ret.Image = row.Image
ret.VariousArtists = row.VariousArtists
opts.ID = row.ID
} else if opts.ArtistID != 0 && opts.Title != "" {
l.Debug().Msgf("Fetching album from DB with artist_id %d and title %s", opts.ArtistID, opts.Title)
row, err := d.q.GetReleaseByArtistAndTitle(ctx, repository.GetReleaseByArtistAndTitleParams{
@ -58,11 +39,7 @@ func (d *Psql) GetAlbum(ctx context.Context, opts db.GetAlbumOpts) (*models.Albu
if err != nil {
return nil, fmt.Errorf("GetAlbum: %w", err)
}
ret.ID = row.ID
ret.MbzID = row.MusicBrainzID
ret.Title = row.Title
ret.Image = row.Image
ret.VariousArtists = row.VariousArtists
opts.ID = row.ID
} else if opts.ArtistID != 0 && len(opts.Titles) > 0 {
l.Debug().Msgf("Fetching release group from DB with artist_id %d and titles %v", opts.ArtistID, opts.Titles)
row, err := d.q.GetReleaseByArtistAndTitles(ctx, repository.GetReleaseByArtistAndTitlesParams{
@ -72,19 +49,19 @@ func (d *Psql) GetAlbum(ctx context.Context, opts db.GetAlbumOpts) (*models.Albu
if err != nil {
return nil, fmt.Errorf("GetAlbum: %w", err)
}
ret.ID = row.ID
ret.MbzID = row.MusicBrainzID
ret.Title = row.Title
ret.Image = row.Image
ret.VariousArtists = row.VariousArtists
} else {
return nil, errors.New("GetAlbum: insufficient information to get album")
opts.ID = row.ID
}
l.Debug().Msgf("Fetching album from DB with id %d", opts.ID)
row, err := d.q.GetRelease(ctx, opts.ID)
if err != nil {
return nil, fmt.Errorf("GetAlbum: %w", err)
}
count, err := d.q.CountListensFromRelease(ctx, repository.CountListensFromReleaseParams{
ListenedAt: time.Unix(0, 0),
ListenedAt_2: time.Now(),
ReleaseID: ret.ID,
ReleaseID: opts.ID,
})
if err != nil {
return nil, fmt.Errorf("GetAlbum: CountListensFromRelease: %w", err)
@ -92,17 +69,32 @@ func (d *Psql) GetAlbum(ctx context.Context, opts db.GetAlbumOpts) (*models.Albu
seconds, err := d.CountTimeListenedToItem(ctx, db.TimeListenedOpts{
Timeframe: db.Timeframe{Period: db.PeriodAllTime},
AlbumID: ret.ID,
AlbumID: opts.ID,
})
if err != nil {
return nil, fmt.Errorf("GetAlbum: CountTimeListenedToItem: %w", err)
}
firstListen, err := d.q.GetFirstListenFromRelease(ctx, ret.ID)
firstListen, err := d.q.GetFirstListenFromRelease(ctx, opts.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetAlbum: GetFirstListenFromRelease: %w", err)
}
rank, err := d.q.GetReleaseAllTimeRank(ctx, opts.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetAlbum: GetReleaseAllTimeRank: %w", err)
}
ret.ID = row.ID
ret.MbzID = row.MusicBrainzID
ret.Title = row.Title
ret.Image = row.Image
ret.VariousArtists = row.VariousArtists
err = json.Unmarshal(row.Artists, &ret.Artists)
if err != nil {
return nil, fmt.Errorf("GetAlbum: json.Unmarshal: %w", err)
}
ret.AllTimeRank = rank.Rank
ret.ListenCount = count
ret.TimeListened = seconds
ret.FirstListen = firstListen.ListenedAt.Unix()
@ -282,6 +274,9 @@ func (d *Psql) UpdateAlbum(ctx context.Context, opts db.UpdateAlbumOpts) error {
}
}
if opts.Image != uuid.Nil {
if opts.ImageSrc == "" {
return fmt.Errorf("UpdateAlbum: image source must be provided when updating an image")
}
l.Debug().Msgf("Updating release with ID %d with image %s", opts.ID, opts.Image)
err := qtx.UpdateReleaseImage(ctx, repository.UpdateReleaseImageParams{
ID: opts.ID,


@ -20,114 +20,60 @@ import (
// Note: sqlc generates a distinct row type for each query even when the columns are identical, which forces the repeated mapping below.
func (d *Psql) GetArtist(ctx context.Context, opts db.GetArtistOpts) (*models.Artist, error) {
l := logger.FromContext(ctx)
if opts.ID != 0 {
l.Debug().Msgf("Fetching artist from DB with id %d", opts.ID)
row, err := d.q.GetArtist(ctx, opts.ID)
if err != nil {
return nil, fmt.Errorf("GetArtist: GetArtist by ID: %w", err)
}
count, err := d.q.CountListensFromArtist(ctx, repository.CountListensFromArtistParams{
ListenedAt: time.Unix(0, 0),
ListenedAt_2: time.Now(),
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountListensFromArtist: %w", err)
}
seconds, err := d.CountTimeListenedToItem(ctx, db.TimeListenedOpts{
Timeframe: db.Timeframe{Period: db.PeriodAllTime},
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountTimeListenedToItem: %w", err)
}
firstListen, err := d.q.GetFirstListenFromArtist(ctx, row.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetAlbum: GetFirstListenFromArtist: %w", err)
}
return &models.Artist{
ID: row.ID,
MbzID: row.MusicBrainzID,
Name: row.Name,
Aliases: row.Aliases,
Image: row.Image,
ListenCount: count,
TimeListened: seconds,
FirstListen: firstListen.ListenedAt.Unix(),
}, nil
} else if opts.MusicBrainzID != uuid.Nil {
if opts.MusicBrainzID != uuid.Nil {
l.Debug().Msgf("Fetching artist from DB with MusicBrainz ID %s", opts.MusicBrainzID)
row, err := d.q.GetArtistByMbzID(ctx, &opts.MusicBrainzID)
if err != nil {
return nil, fmt.Errorf("GetArtist: GetArtistByMbzID: %w", err)
}
count, err := d.q.CountListensFromArtist(ctx, repository.CountListensFromArtistParams{
ListenedAt: time.Unix(0, 0),
ListenedAt_2: time.Now(),
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountListensFromArtist: %w", err)
}
seconds, err := d.CountTimeListenedToItem(ctx, db.TimeListenedOpts{
Timeframe: db.Timeframe{Period: db.PeriodAllTime},
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountTimeListenedToItem: %w", err)
}
firstListen, err := d.q.GetFirstListenFromArtist(ctx, row.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetArtist: GetFirstListenFromArtist: %w", err)
}
return &models.Artist{
ID: row.ID,
MbzID: row.MusicBrainzID,
Name: row.Name,
Aliases: row.Aliases,
Image: row.Image,
ListenCount: count,
TimeListened: seconds,
FirstListen: firstListen.ListenedAt.Unix(),
}, nil
opts.ID = row.ID
} else if opts.Name != "" {
l.Debug().Msgf("Fetching artist from DB with name '%s'", opts.Name)
row, err := d.q.GetArtistByName(ctx, opts.Name)
if err != nil {
return nil, fmt.Errorf("GetArtist: GetArtistByName: %w", err)
}
count, err := d.q.CountListensFromArtist(ctx, repository.CountListensFromArtistParams{
ListenedAt: time.Unix(0, 0),
ListenedAt_2: time.Now(),
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountListensFromArtist: %w", err)
}
seconds, err := d.CountTimeListenedToItem(ctx, db.TimeListenedOpts{
Timeframe: db.Timeframe{Period: db.PeriodAllTime},
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountTimeListenedToItem: %w", err)
}
firstListen, err := d.q.GetFirstListenFromArtist(ctx, row.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetArtist: GetFirstListenFromArtist: %w", err)
}
return &models.Artist{
ID: row.ID,
MbzID: row.MusicBrainzID,
Name: row.Name,
Aliases: row.Aliases,
Image: row.Image,
ListenCount: count,
TimeListened: seconds,
FirstListen: firstListen.ListenedAt.Unix(),
}, nil
} else {
return nil, errors.New("insufficient information to get artist")
opts.ID = row.ID
}
l.Debug().Msgf("Fetching artist from DB with id %d", opts.ID)
row, err := d.q.GetArtist(ctx, opts.ID)
if err != nil {
return nil, fmt.Errorf("GetArtist: GetArtist by ID: %w", err)
}
count, err := d.q.CountListensFromArtist(ctx, repository.CountListensFromArtistParams{
ListenedAt: time.Unix(0, 0),
ListenedAt_2: time.Now(),
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountListensFromArtist: %w", err)
}
seconds, err := d.CountTimeListenedToItem(ctx, db.TimeListenedOpts{
Timeframe: db.Timeframe{Period: db.PeriodAllTime},
ArtistID: row.ID,
})
if err != nil {
return nil, fmt.Errorf("GetArtist: CountTimeListenedToItem: %w", err)
}
firstListen, err := d.q.GetFirstListenFromArtist(ctx, row.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetArtist: GetFirstListenFromArtist: %w", err)
}
rank, err := d.q.GetArtistAllTimeRank(ctx, opts.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetArtist: GetArtistAllTimeRank: %w", err)
}
return &models.Artist{
ID: row.ID,
MbzID: row.MusicBrainzID,
Name: row.Name,
Aliases: row.Aliases,
Image: row.Image,
ListenCount: count,
TimeListened: seconds,
AllTimeRank: rank.Rank,
FirstListen: firstListen.ListenedAt.Unix(),
}, nil
}
// Inserts all unique aliases into the DB with specified source
@@ -264,6 +210,9 @@ func (d *Psql) UpdateArtist(ctx context.Context, opts db.UpdateArtistOpts) error
}
}
if opts.Image != uuid.Nil {
if opts.ImageSrc == "" {
return fmt.Errorf("UpdateArtist: image source must be provided when updating an image")
}
l.Debug().Msgf("Updating artist with id %d with image %s", opts.ID, opts.Image)
err = qtx.UpdateArtistImage(ctx, repository.UpdateArtistImageParams{
ID: opts.ID,


@@ -72,3 +72,26 @@ func (d *Psql) AlbumsWithoutImages(ctx context.Context, from int32) ([]*models.A
}
return albums, nil
}
// returns nil, nil on no results
func (d *Psql) ArtistsWithoutImages(ctx context.Context, from int32) ([]*models.Artist, error) {
rows, err := d.q.GetArtistsWithoutImages(ctx, repository.GetArtistsWithoutImagesParams{
Limit: 20,
ID: from,
})
if errors.Is(err, pgx.ErrNoRows) {
return nil, nil
} else if err != nil {
return nil, fmt.Errorf("ArtistsWithoutImages: %w", err)
}
ret := make([]*models.Artist, len(rows))
for i, row := range rows {
ret[i] = &models.Artist{
ID: row.ID,
Name: row.Name,
MbzID: row.MusicBrainzID,
}
}
return ret, nil
}


@@ -14,54 +14,54 @@ func (d *Psql) GetInterest(ctx context.Context, opts db.GetInterestOpts) ([]db.I
return nil, errors.New("GetInterest: bucket count must be provided")
}
ret := make([]db.InterestBucket, opts.Buckets)
ret := make([]db.InterestBucket, 0)
if opts.ArtistID != 0 {
resp, err := d.q.GetGroupedListensFromArtist(ctx, repository.GetGroupedListensFromArtistParams{
ArtistID: opts.ArtistID,
BucketCount: opts.Buckets,
BucketCount: int32(opts.Buckets),
})
if err != nil {
return nil, fmt.Errorf("GetInterest: GetGroupedListensFromArtist: %w", err)
}
for i, v := range resp {
ret[i] = db.InterestBucket{
for _, v := range resp {
ret = append(ret, db.InterestBucket{
BucketStart: v.BucketStart,
BucketEnd: v.BucketEnd,
ListenCount: v.ListenCount,
}
})
}
return ret, nil
} else if opts.AlbumID != 0 {
resp, err := d.q.GetGroupedListensFromRelease(ctx, repository.GetGroupedListensFromReleaseParams{
ReleaseID: opts.AlbumID,
BucketCount: opts.Buckets,
BucketCount: int32(opts.Buckets),
})
if err != nil {
return nil, fmt.Errorf("GetInterest: GetGroupedListensFromRelease: %w", err)
}
for i, v := range resp {
ret[i] = db.InterestBucket{
for _, v := range resp {
ret = append(ret, db.InterestBucket{
BucketStart: v.BucketStart,
BucketEnd: v.BucketEnd,
ListenCount: v.ListenCount,
}
})
}
return ret, nil
} else if opts.TrackID != 0 {
resp, err := d.q.GetGroupedListensFromTrack(ctx, repository.GetGroupedListensFromTrackParams{
ID: opts.TrackID,
BucketCount: opts.Buckets,
BucketCount: int32(opts.Buckets),
})
if err != nil {
return nil, fmt.Errorf("GetInterest: GetGroupedListensFromTrack: %w", err)
}
for i, v := range resp {
ret[i] = db.InterestBucket{
for _, v := range resp {
ret = append(ret, db.InterestBucket{
BucketStart: v.BucketStart,
BucketEnd: v.BucketEnd,
ListenCount: v.ListenCount,
}
})
}
return ret, nil
} else {


@@ -23,7 +23,7 @@ func (d *Psql) GetListenActivity(ctx context.Context, opts db.ListenActivityOpts
var listenActivity []db.ListenActivityItem
if opts.AlbumID > 0 {
l.Debug().Msgf("Fetching listen activity for %d %s(s) from %v to %v for release group %d",
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05"), t2.Format("Jan 02, 2006 15:04:05"), opts.AlbumID)
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05 MST"), t2.Format("Jan 02, 2006 15:04:05 MST"), opts.AlbumID)
rows, err := d.q.ListenActivityForRelease(ctx, repository.ListenActivityForReleaseParams{
Column1: opts.Timezone.String(),
ListenedAt: t1,
@@ -44,7 +44,7 @@ func (d *Psql) GetListenActivity(ctx context.Context, opts db.ListenActivityOpts
l.Debug().Msgf("Database responded with %d steps", len(rows))
} else if opts.ArtistID > 0 {
l.Debug().Msgf("Fetching listen activity for %d %s(s) from %v to %v for artist %d",
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05"), t2.Format("Jan 02, 2006 15:04:05"), opts.ArtistID)
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05 MST"), t2.Format("Jan 02, 2006 15:04:05 MST"), opts.ArtistID)
rows, err := d.q.ListenActivityForArtist(ctx, repository.ListenActivityForArtistParams{
Column1: opts.Timezone.String(),
ListenedAt: t1,
@@ -65,7 +65,7 @@ func (d *Psql) GetListenActivity(ctx context.Context, opts db.ListenActivityOpts
l.Debug().Msgf("Database responded with %d steps", len(rows))
} else if opts.TrackID > 0 {
l.Debug().Msgf("Fetching listen activity for %d %s(s) from %v to %v for track %d",
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05"), t2.Format("Jan 02, 2006 15:04:05"), opts.TrackID)
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05 MST"), t2.Format("Jan 02, 2006 15:04:05 MST"), opts.TrackID)
rows, err := d.q.ListenActivityForTrack(ctx, repository.ListenActivityForTrackParams{
Column1: opts.Timezone.String(),
ListenedAt: t1,
@@ -86,7 +86,7 @@ func (d *Psql) GetListenActivity(ctx context.Context, opts db.ListenActivityOpts
l.Debug().Msgf("Database responded with %d steps", len(rows))
} else {
l.Debug().Msgf("Fetching listen activity for %d %s(s) from %v to %v",
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05"), t2.Format("Jan 02, 2006 15:04:05"))
opts.Range, opts.Step, t1.Format("Jan 02, 2006 15:04:05 MST"), t2.Format("Jan 02, 2006 15:04:05 MST"))
rows, err := d.q.ListenActivity(ctx, repository.ListenActivityParams{
Column1: opts.Timezone.String(),
ListenedAt: t1,


@@ -97,20 +97,19 @@ func TestListenActivity(t *testing.T) {
err = store.Exec(context.Background(),
`INSERT INTO listens (user_id, track_id, listened_at)
VALUES (1, 1, NOW() - INTERVAL '1 month'),
(1, 1, NOW() - INTERVAL '2 months'),
(1, 1, NOW() - INTERVAL '3 months'),
(1, 2, NOW() - INTERVAL '1 month'),
(1, 2, NOW() - INTERVAL '2 months')`)
VALUES (1, 1, NOW() - INTERVAL '1 month 1 day'),
(1, 1, NOW() - INTERVAL '2 months 1 day'),
(1, 1, NOW() - INTERVAL '3 months 1 day'),
(1, 2, NOW() - INTERVAL '1 month 1 day'),
(1, 2, NOW() - INTERVAL '1 second'),
(1, 2, NOW() - INTERVAL '2 seconds'),
(1, 2, NOW() - INTERVAL '2 months 1 day')`)
require.NoError(t, err)
// This test is bad, and I think it's because of daylight savings.
// I need to find a better test.
activity, err = store.GetListenActivity(ctx, db.ListenActivityOpts{Step: db.StepMonth, Range: 8})
require.NoError(t, err)
// require.Len(t, activity, 8)
// assert.Equal(t, []int64{0, 0, 0, 0, 1, 2, 2, 0}, flattenListenCounts(activity))
require.Len(t, activity, 4)
assert.Equal(t, []int64{1, 2, 2, 2}, flattenListenCounts(activity))
// Truncate listens table and insert specific dates for testing opts.Step = db.StepYear
err = store.Exec(context.Background(), `TRUNCATE TABLE listens RESTART IDENTITY`)


@@ -52,7 +52,7 @@ func (d *Psql) MergeTracks(ctx context.Context, fromId, toId int32) error {
}
err = qtx.CleanOrphanedEntries(ctx)
if err != nil {
l.Err(err).Msg("Failed to clean orphaned entries")
l.Err(err).Msg("MergeTracks: Failed to clean orphaned entries")
return err
}
return tx.Commit(ctx)


@@ -12,27 +12,27 @@ func setupTestDataForMerge(t *testing.T) {
truncateTestData(t)
// Insert artists
err := store.Exec(context.Background(),
`INSERT INTO artists (musicbrainz_id, image, image_source)
`INSERT INTO artists (musicbrainz_id, image, image_source)
VALUES ('00000000-0000-0000-0000-000000000001', '10000000-0000-0000-0000-000000000000', 'source.com'),
('00000000-0000-0000-0000-000000000002', NULL, NULL)`)
require.NoError(t, err)
err = store.Exec(context.Background(),
`INSERT INTO artist_aliases (artist_id, alias, source, is_primary)
`INSERT INTO artist_aliases (artist_id, alias, source, is_primary)
VALUES (1, 'Artist One', 'Testing', true),
(2, 'Artist Two', 'Testing', true)`)
require.NoError(t, err)
// Insert albums
err = store.Exec(context.Background(),
`INSERT INTO releases (musicbrainz_id, image, image_source)
`INSERT INTO releases (musicbrainz_id, image, image_source)
VALUES ('11111111-1111-1111-1111-111111111111', '20000000-0000-0000-0000-000000000000', 'source.com'),
('22222222-2222-2222-2222-222222222222', NULL, NULL),
(NULL, NULL, NULL)`)
require.NoError(t, err)
err = store.Exec(context.Background(),
`INSERT INTO release_aliases (release_id, alias, source, is_primary)
`INSERT INTO release_aliases (release_id, alias, source, is_primary)
VALUES (1, 'Album One', 'Testing', true),
(2, 'Album Two', 'Testing', true),
(3, 'Album Three', 'Testing', true)`)
@@ -40,7 +40,7 @@ func setupTestDataForMerge(t *testing.T) {
// Insert tracks
err = store.Exec(context.Background(),
`INSERT INTO tracks (musicbrainz_id, release_id)
`INSERT INTO tracks (musicbrainz_id, release_id)
VALUES ('33333333-3333-3333-3333-333333333333', 1),
('44444444-4444-4444-4444-444444444444', 2),
('55555555-5555-5555-5555-555555555555', 1),
@@ -48,7 +48,7 @@ func setupTestDataForMerge(t *testing.T) {
require.NoError(t, err)
err = store.Exec(context.Background(),
`INSERT INTO track_aliases (track_id, alias, source, is_primary)
`INSERT INTO track_aliases (track_id, alias, source, is_primary)
VALUES (1, 'Track One', 'Testing', true),
(2, 'Track Two', 'Testing', true),
(3, 'Track Three', 'Testing', true),
@@ -57,18 +57,18 @@ func setupTestDataForMerge(t *testing.T) {
// Associate artists with albums and tracks
err = store.Exec(context.Background(),
`INSERT INTO artist_releases (artist_id, release_id)
`INSERT INTO artist_releases (artist_id, release_id)
VALUES (1, 1), (2, 2), (1, 3)`)
require.NoError(t, err)
err = store.Exec(context.Background(),
`INSERT INTO artist_tracks (artist_id, track_id)
`INSERT INTO artist_tracks (artist_id, track_id)
VALUES (1, 1), (2, 2), (1, 3), (1, 4)`)
require.NoError(t, err)
// Insert listens
err = store.Exec(context.Background(),
`INSERT INTO listens (user_id, track_id, listened_at)
`INSERT INTO listens (user_id, track_id, listened_at)
VALUES (1, 1, NOW() - INTERVAL '1 day'),
(1, 2, NOW() - INTERVAL '2 days'),
(1, 3, NOW() - INTERVAL '3 days'),
@@ -90,14 +90,14 @@ func TestMergeTracks(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 2, count, "expected all listens to be merged into Track 2")
// Verify artist is associated with album
// Verify old artist is not associated with album
exists, err := store.RowExists(ctx, `
SELECT EXISTS (
SELECT 1 FROM artist_releases
WHERE release_id = $1 AND artist_id = $2
)`, 2, 1)
require.NoError(t, err)
assert.True(t, exists, "expected old artist to be associated with album")
assert.False(t, exists)
truncateTestData(t)
}


@@ -11,7 +11,7 @@ import (
"github.com/gabehf/koito/internal/repository"
)
func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts) (*db.PaginatedResponse[*models.Album], error) {
func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts) (*db.PaginatedResponse[db.RankedItem[*models.Album]], error) {
l := logger.FromContext(ctx)
offset := (opts.Page - 1) * opts.Limit
t1, t2 := db.TimeframeToTimeRange(opts.Timeframe)
@@ -19,7 +19,7 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
opts.Limit = DefaultItemsPerPage
}
var rgs []*models.Album
var rgs []db.RankedItem[*models.Album]
var count int64
if opts.ArtistID != 0 {
@@ -36,7 +36,7 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
if err != nil {
return nil, fmt.Errorf("GetTopAlbumsPaginated: GetTopReleasesFromArtist: %w", err)
}
rgs = make([]*models.Album, len(rows))
rgs = make([]db.RankedItem[*models.Album], len(rows))
l.Debug().Msgf("Database responded with %d items", len(rows))
for i, v := range rows {
artists := make([]models.SimpleArtist, 0)
@@ -45,7 +45,7 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
l.Err(err).Msgf("Error unmarshalling artists for release group with id %d", v.ID)
return nil, fmt.Errorf("GetTopAlbumsPaginated: Unmarshal: %w", err)
}
rgs[i] = &models.Album{
rgs[i].Item = &models.Album{
ID: v.ID,
MbzID: v.MusicBrainzID,
Title: v.Title,
@@ -54,6 +54,7 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
VariousArtists: v.VariousArtists,
ListenCount: v.ListenCount,
}
rgs[i].Rank = v.Rank
}
count, err = d.q.CountReleasesFromArtist(ctx, int32(opts.ArtistID))
if err != nil {
@@ -71,7 +72,7 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
if err != nil {
return nil, fmt.Errorf("GetTopAlbumsPaginated: GetTopReleasesPaginated: %w", err)
}
rgs = make([]*models.Album, len(rows))
rgs = make([]db.RankedItem[*models.Album], len(rows))
l.Debug().Msgf("Database responded with %d items", len(rows))
for i, row := range rows {
artists := make([]models.SimpleArtist, 0)
@@ -80,16 +81,16 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
l.Err(err).Msgf("Error unmarshalling artists for release group with id %d", row.ID)
return nil, fmt.Errorf("GetTopAlbumsPaginated: Unmarshal: %w", err)
}
t := &models.Album{
Title: row.Title,
MbzID: row.MusicBrainzID,
rgs[i].Item = &models.Album{
ID: row.ID,
MbzID: row.MusicBrainzID,
Title: row.Title,
Image: row.Image,
Artists: artists,
VariousArtists: row.VariousArtists,
ListenCount: row.ListenCount,
}
rgs[i] = t
rgs[i].Rank = row.Rank
}
count, err = d.q.CountTopReleases(ctx, repository.CountTopReleasesParams{
ListenedAt: t1,
@@ -100,7 +101,7 @@ func (d *Psql) GetTopAlbumsPaginated(ctx context.Context, opts db.GetItemsOpts)
}
l.Debug().Msgf("Database responded with %d albums out of a total %d", len(rows), count)
}
return &db.PaginatedResponse[*models.Album]{
return &db.PaginatedResponse[db.RankedItem[*models.Album]]{
Items: rgs,
TotalCount: count,
ItemsPerPage: int32(opts.Limit),


@@ -18,16 +18,16 @@ func TestGetTopAlbumsPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 4)
assert.Equal(t, int64(4), resp.TotalCount)
assert.Equal(t, "Release One", resp.Items[0].Title)
assert.Equal(t, "Release Two", resp.Items[1].Title)
assert.Equal(t, "Release Three", resp.Items[2].Title)
assert.Equal(t, "Release Four", resp.Items[3].Title)
assert.Equal(t, "Release One", resp.Items[0].Item.Title)
assert.Equal(t, "Release Two", resp.Items[1].Item.Title)
assert.Equal(t, "Release Three", resp.Items[2].Item.Title)
assert.Equal(t, "Release Four", resp.Items[3].Item.Title)
// Test pagination
resp, err = store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Limit: 1, Page: 2, Timeframe: db.Timeframe{Period: db.PeriodAllTime}})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, "Release Two", resp.Items[0].Title)
assert.Equal(t, "Release Two", resp.Items[0].Item.Title)
// Test page out of range
resp, err = store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Limit: 1, Page: 10, Timeframe: db.Timeframe{Period: db.PeriodAllTime}})
@@ -57,29 +57,29 @@ func TestGetTopAlbumsPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Release Four", resp.Items[0].Title)
assert.Equal(t, "Release Four", resp.Items[0].Item.Title)
resp, err = store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodMonth}})
require.NoError(t, err)
require.Len(t, resp.Items, 2)
assert.Equal(t, int64(2), resp.TotalCount)
assert.Equal(t, "Release Three", resp.Items[0].Title)
assert.Equal(t, "Release Four", resp.Items[1].Title)
assert.Equal(t, "Release Three", resp.Items[0].Item.Title)
assert.Equal(t, "Release Four", resp.Items[1].Item.Title)
resp, err = store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodYear}})
require.NoError(t, err)
require.Len(t, resp.Items, 3)
assert.Equal(t, int64(3), resp.TotalCount)
assert.Equal(t, "Release Two", resp.Items[0].Title)
assert.Equal(t, "Release Three", resp.Items[1].Title)
assert.Equal(t, "Release Four", resp.Items[2].Title)
assert.Equal(t, "Release Two", resp.Items[0].Item.Title)
assert.Equal(t, "Release Three", resp.Items[1].Item.Title)
assert.Equal(t, "Release Four", resp.Items[2].Item.Title)
// test specific artist
resp, err = store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodYear}, ArtistID: 2})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Release Two", resp.Items[0].Title)
assert.Equal(t, "Release Two", resp.Items[0].Item.Title)
// Test specify dates
@@ -89,11 +89,11 @@ func TestGetTopAlbumsPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Release One", resp.Items[0].Title)
assert.Equal(t, "Release One", resp.Items[0].Item.Title)
resp, err = store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Month: 6, Year: 2024}})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Release Two", resp.Items[0].Title)
assert.Equal(t, "Release Two", resp.Items[0].Item.Title)
}


@@ -10,7 +10,7 @@ import (
"github.com/gabehf/koito/internal/repository"
)
func (d *Psql) GetTopArtistsPaginated(ctx context.Context, opts db.GetItemsOpts) (*db.PaginatedResponse[*models.Artist], error) {
func (d *Psql) GetTopArtistsPaginated(ctx context.Context, opts db.GetItemsOpts) (*db.PaginatedResponse[db.RankedItem[*models.Artist]], error) {
l := logger.FromContext(ctx)
offset := (opts.Page - 1) * opts.Limit
t1, t2 := db.TimeframeToTimeRange(opts.Timeframe)
@@ -28,7 +28,7 @@ func (d *Psql) GetTopArtistsPaginated(ctx context.Context, opts db.GetItemsOpts)
if err != nil {
return nil, fmt.Errorf("GetTopArtistsPaginated: GetTopArtistsPaginated: %w", err)
}
rgs := make([]*models.Artist, len(rows))
rgs := make([]db.RankedItem[*models.Artist], len(rows))
for i, row := range rows {
t := &models.Artist{
Name: row.Name,
@@ -37,7 +37,8 @@ func (d *Psql) GetTopArtistsPaginated(ctx context.Context, opts db.GetItemsOpts)
Image: row.Image,
ListenCount: row.ListenCount,
}
rgs[i] = t
rgs[i].Item = t
rgs[i].Rank = row.Rank
}
count, err := d.q.CountTopArtists(ctx, repository.CountTopArtistsParams{
ListenedAt: t1,
@@ -48,7 +49,7 @@ func (d *Psql) GetTopArtistsPaginated(ctx context.Context, opts db.GetItemsOpts)
}
l.Debug().Msgf("Database responded with %d artists out of a total %d", len(rows), count)
return &db.PaginatedResponse[*models.Artist]{
return &db.PaginatedResponse[db.RankedItem[*models.Artist]]{
Items: rgs,
TotalCount: count,
ItemsPerPage: int32(opts.Limit),


@@ -18,16 +18,16 @@ func TestGetTopArtistsPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 4)
assert.Equal(t, int64(4), resp.TotalCount)
assert.Equal(t, "Artist One", resp.Items[0].Name)
assert.Equal(t, "Artist Two", resp.Items[1].Name)
assert.Equal(t, "Artist Three", resp.Items[2].Name)
assert.Equal(t, "Artist Four", resp.Items[3].Name)
assert.Equal(t, "Artist One", resp.Items[0].Item.Name)
assert.Equal(t, "Artist Two", resp.Items[1].Item.Name)
assert.Equal(t, "Artist Three", resp.Items[2].Item.Name)
assert.Equal(t, "Artist Four", resp.Items[3].Item.Name)
// Test pagination
resp, err = store.GetTopArtistsPaginated(ctx, db.GetItemsOpts{Limit: 1, Page: 2, Timeframe: db.Timeframe{Period: db.PeriodAllTime}})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, "Artist Two", resp.Items[0].Name)
assert.Equal(t, "Artist Two", resp.Items[0].Item.Name)
// Test page out of range
resp, err = store.GetTopArtistsPaginated(ctx, db.GetItemsOpts{Limit: 1, Page: 10, Timeframe: db.Timeframe{Period: db.PeriodAllTime}})
@@ -57,22 +57,22 @@ func TestGetTopArtistsPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Artist Four", resp.Items[0].Name)
assert.Equal(t, "Artist Four", resp.Items[0].Item.Name)
resp, err = store.GetTopArtistsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodMonth}})
require.NoError(t, err)
require.Len(t, resp.Items, 2)
assert.Equal(t, int64(2), resp.TotalCount)
assert.Equal(t, "Artist Three", resp.Items[0].Name)
assert.Equal(t, "Artist Four", resp.Items[1].Name)
assert.Equal(t, "Artist Three", resp.Items[0].Item.Name)
assert.Equal(t, "Artist Four", resp.Items[1].Item.Name)
resp, err = store.GetTopArtistsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodYear}})
require.NoError(t, err)
require.Len(t, resp.Items, 3)
assert.Equal(t, int64(3), resp.TotalCount)
assert.Equal(t, "Artist Two", resp.Items[0].Name)
assert.Equal(t, "Artist Three", resp.Items[1].Name)
assert.Equal(t, "Artist Four", resp.Items[2].Name)
assert.Equal(t, "Artist Two", resp.Items[0].Item.Name)
assert.Equal(t, "Artist Three", resp.Items[1].Item.Name)
assert.Equal(t, "Artist Four", resp.Items[2].Item.Name)
// Test specify dates
@@ -82,11 +82,11 @@ func TestGetTopArtistsPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Artist One", resp.Items[0].Name)
assert.Equal(t, "Artist One", resp.Items[0].Item.Name)
resp, err = store.GetTopArtistsPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Month: 6, Year: 2024}})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Artist Two", resp.Items[0].Name)
assert.Equal(t, "Artist Two", resp.Items[0].Item.Name)
}


@@ -11,14 +11,14 @@ import (
"github.com/gabehf/koito/internal/repository"
)
func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts) (*db.PaginatedResponse[*models.Track], error) {
func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts) (*db.PaginatedResponse[db.RankedItem[*models.Track]], error) {
l := logger.FromContext(ctx)
offset := (opts.Page - 1) * opts.Limit
t1, t2 := db.TimeframeToTimeRange(opts.Timeframe)
if opts.Limit == 0 {
opts.Limit = DefaultItemsPerPage
}
var tracks []*models.Track
var tracks []db.RankedItem[*models.Track]
var count int64
if opts.AlbumID > 0 {
l.Debug().Msgf("Fetching top %d tracks on page %d from range %v to %v",
@@ -33,7 +33,7 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
if err != nil {
return nil, fmt.Errorf("GetTopTracksPaginated: GetTopTracksInReleasePaginated: %w", err)
}
tracks = make([]*models.Track, len(rows))
tracks = make([]db.RankedItem[*models.Track], len(rows))
for i, row := range rows {
artists := make([]models.SimpleArtist, 0)
err = json.Unmarshal(row.Artists, &artists)
@@ -50,7 +50,8 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
AlbumID: row.ReleaseID,
Artists: artists,
}
tracks[i] = t
tracks[i].Item = t
tracks[i].Rank = row.Rank
}
count, err = d.q.CountTopTracksByRelease(ctx, repository.CountTopTracksByReleaseParams{
ListenedAt: t1,
@@ -73,7 +74,7 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
if err != nil {
return nil, fmt.Errorf("GetTopTracksPaginated: GetTopTracksByArtistPaginated: %w", err)
}
tracks = make([]*models.Track, len(rows))
tracks = make([]db.RankedItem[*models.Track], len(rows))
for i, row := range rows {
artists := make([]models.SimpleArtist, 0)
err = json.Unmarshal(row.Artists, &artists)
@@ -90,7 +91,8 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
AlbumID: row.ReleaseID,
Artists: artists,
}
tracks[i] = t
tracks[i].Item = t
tracks[i].Rank = row.Rank
}
count, err = d.q.CountTopTracksByArtist(ctx, repository.CountTopTracksByArtistParams{
ListenedAt: t1,
@@ -112,7 +114,7 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
if err != nil {
return nil, fmt.Errorf("GetTopTracksPaginated: GetTopTracksPaginated: %w", err)
}
tracks = make([]*models.Track, len(rows))
tracks = make([]db.RankedItem[*models.Track], len(rows))
for i, row := range rows {
artists := make([]models.SimpleArtist, 0)
err = json.Unmarshal(row.Artists, &artists)
@@ -129,7 +131,8 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
AlbumID: row.ReleaseID,
Artists: artists,
}
tracks[i] = t
tracks[i].Item = t
tracks[i].Rank = row.Rank
}
count, err = d.q.CountTopTracks(ctx, repository.CountTopTracksParams{
ListenedAt: t1,
@@ -141,7 +144,7 @@ func (d *Psql) GetTopTracksPaginated(ctx context.Context, opts db.GetItemsOpts)
l.Debug().Msgf("Database responded with %d tracks out of a total %d", len(rows), count)
}
return &db.PaginatedResponse[*models.Track]{
return &db.PaginatedResponse[db.RankedItem[*models.Track]]{
Items: tracks,
TotalCount: count,
ItemsPerPage: int32(opts.Limit),


@@ -18,19 +18,19 @@ func TestGetTopTracksPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 4)
assert.Equal(t, int64(4), resp.TotalCount)
assert.Equal(t, "Track One", resp.Items[0].Title)
assert.Equal(t, "Track Two", resp.Items[1].Title)
assert.Equal(t, "Track Three", resp.Items[2].Title)
assert.Equal(t, "Track Four", resp.Items[3].Title)
assert.Equal(t, "Track One", resp.Items[0].Item.Title)
assert.Equal(t, "Track Two", resp.Items[1].Item.Title)
assert.Equal(t, "Track Three", resp.Items[2].Item.Title)
assert.Equal(t, "Track Four", resp.Items[3].Item.Title)
// ensure artists are included
require.Len(t, resp.Items[0].Artists, 1)
assert.Equal(t, "Artist One", resp.Items[0].Artists[0].Name)
require.Len(t, resp.Items[0].Item.Artists, 1)
assert.Equal(t, "Artist One", resp.Items[0].Item.Artists[0].Name)
// Test pagination
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Limit: 1, Page: 2, Timeframe: db.Timeframe{Period: db.PeriodAllTime}})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, "Track Two", resp.Items[0].Title)
assert.Equal(t, "Track Two", resp.Items[0].Item.Title)
// Test page out of range
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Limit: 1, Page: 10, Timeframe: db.Timeframe{Period: db.PeriodAllTime}})
@@ -60,41 +60,41 @@ func TestGetTopTracksPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Track Four", resp.Items[0].Title)
assert.Equal(t, "Track Four", resp.Items[0].Item.Title)
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodMonth}})
require.NoError(t, err)
require.Len(t, resp.Items, 2)
assert.Equal(t, int64(2), resp.TotalCount)
assert.Equal(t, "Track Three", resp.Items[0].Title)
assert.Equal(t, "Track Four", resp.Items[1].Title)
assert.Equal(t, "Track Three", resp.Items[0].Item.Title)
assert.Equal(t, "Track Four", resp.Items[1].Item.Title)
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodYear}})
require.NoError(t, err)
require.Len(t, resp.Items, 3)
assert.Equal(t, int64(3), resp.TotalCount)
assert.Equal(t, "Track Two", resp.Items[0].Title)
assert.Equal(t, "Track Three", resp.Items[1].Title)
assert.Equal(t, "Track Four", resp.Items[2].Title)
assert.Equal(t, "Track Two", resp.Items[0].Item.Title)
assert.Equal(t, "Track Three", resp.Items[1].Item.Title)
assert.Equal(t, "Track Four", resp.Items[2].Item.Title)
// Test filter by artists and releases
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodAllTime}, ArtistID: 1})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Track One", resp.Items[0].Title)
assert.Equal(t, "Track One", resp.Items[0].Item.Title)
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodAllTime}, AlbumID: 2})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Track Two", resp.Items[0].Title)
assert.Equal(t, "Track Two", resp.Items[0].Item.Title)
// When both ArtistID and AlbumID are specified, the artist ID is ignored
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Period: db.PeriodAllTime}, AlbumID: 2, ArtistID: 1})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Track Two", resp.Items[0].Title)
assert.Equal(t, "Track Two", resp.Items[0].Item.Title)
// Test specify dates
@@ -104,11 +104,11 @@ func TestGetTopTracksPaginated(t *testing.T) {
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Track One", resp.Items[0].Title)
assert.Equal(t, "Track One", resp.Items[0].Item.Title)
resp, err = store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Timeframe: db.Timeframe{Month: 6, Year: 2024}})
require.NoError(t, err)
require.Len(t, resp.Items, 1)
assert.Equal(t, int64(1), resp.TotalCount)
assert.Equal(t, "Track Two", resp.Items[0].Title)
assert.Equal(t, "Track Two", resp.Items[0].Item.Title)
}


@ -21,37 +21,13 @@ func (d *Psql) GetTrack(ctx context.Context, opts db.GetTrackOpts) (*models.Trac
l := logger.FromContext(ctx)
var track models.Track
if opts.ID != 0 {
l.Debug().Msgf("Fetching track from DB with id %d", opts.ID)
t, err := d.q.GetTrack(ctx, opts.ID)
if err != nil {
return nil, fmt.Errorf("GetTrack: GetTrack By ID: %w", err)
}
track = models.Track{
ID: t.ID,
MbzID: t.MusicBrainzID,
Title: t.Title,
AlbumID: t.ReleaseID,
Image: t.Image,
Duration: t.Duration,
}
err = json.Unmarshal(t.Artists, &track.Artists)
if err != nil {
return nil, fmt.Errorf("GetTrack: json.Unmarshal: %w", err)
}
} else if opts.MusicBrainzID != uuid.Nil {
if opts.MusicBrainzID != uuid.Nil {
l.Debug().Msgf("Fetching track from DB with MusicBrainz ID %s", opts.MusicBrainzID)
t, err := d.q.GetTrackByMbzID(ctx, &opts.MusicBrainzID)
if err != nil {
return nil, fmt.Errorf("GetTrack: GetTrackByMbzID: %w", err)
}
track = models.Track{
ID: t.ID,
MbzID: t.MusicBrainzID,
Title: t.Title,
AlbumID: t.ReleaseID,
Duration: t.Duration,
}
opts.ID = t.ID
} else if len(opts.ArtistIDs) > 0 && opts.ReleaseID != 0 {
l.Debug().Msgf("Fetching track from DB from release id %d with title '%s' and artist id(s) '%v'", opts.ReleaseID, opts.Title, opts.ArtistIDs)
t, err := d.q.GetTrackByTrackInfo(ctx, repository.GetTrackByTrackInfoParams{
@ -62,21 +38,19 @@ func (d *Psql) GetTrack(ctx context.Context, opts db.GetTrackOpts) (*models.Trac
if err != nil {
return nil, fmt.Errorf("GetTrack: GetTrackByTrackInfo: %w", err)
}
track = models.Track{
ID: t.ID,
MbzID: t.MusicBrainzID,
Title: t.Title,
AlbumID: t.ReleaseID,
Duration: t.Duration,
}
} else {
return nil, errors.New("GetTrack: insufficient information to get track")
opts.ID = t.ID
}
l.Debug().Msgf("Fetching track from DB with id %d", opts.ID)
t, err := d.q.GetTrack(ctx, opts.ID)
if err != nil {
return nil, fmt.Errorf("GetTrack: GetTrack By ID: %w", err)
}
count, err := d.q.CountListensFromTrack(ctx, repository.CountListensFromTrackParams{
ListenedAt: time.Unix(0, 0),
ListenedAt_2: time.Now(),
TrackID: track.ID,
TrackID: opts.ID,
})
if err != nil {
return nil, fmt.Errorf("GetTrack: CountListensFromTrack: %w", err)
@ -84,20 +58,37 @@ func (d *Psql) GetTrack(ctx context.Context, opts db.GetTrackOpts) (*models.Trac
seconds, err := d.CountTimeListenedToItem(ctx, db.TimeListenedOpts{
Timeframe: db.Timeframe{Period: db.PeriodAllTime},
TrackID: track.ID,
TrackID: opts.ID,
})
if err != nil {
return nil, fmt.Errorf("GetTrack: CountTimeListenedToItem: %w", err)
}
firstListen, err := d.q.GetFirstListenFromTrack(ctx, track.ID)
firstListen, err := d.q.GetFirstListenFromTrack(ctx, opts.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetTrack: GetFirstListenFromTrack: %w", err)
}
rank, err := d.q.GetTrackAllTimeRank(ctx, opts.ID)
if err != nil && !errors.Is(err, pgx.ErrNoRows) {
return nil, fmt.Errorf("GetTrack: GetTrackAllTimeRank: %w", err)
}
track.ListenCount = count
track.TimeListened = seconds
track.FirstListen = firstListen.ListenedAt.Unix()
track = models.Track{
ID: t.ID,
MbzID: t.MusicBrainzID,
Title: t.Title,
AlbumID: t.ReleaseID,
Image: t.Image,
Duration: t.Duration,
AllTimeRank: rank.Rank,
ListenCount: count,
TimeListened: seconds,
FirstListen: firstListen.ListenedAt.Unix(),
}
err = json.Unmarshal(t.Artists, &track.Artists)
if err != nil {
return nil, fmt.Errorf("GetTrack: json.Unmarshal: %w", err)
}
return &track, nil
}
@ -146,6 +137,13 @@ func (d *Psql) SaveTrack(ctx context.Context, opts db.SaveTrackOpts) (*models.Tr
if err != nil {
return nil, fmt.Errorf("SaveTrack: AssociateArtistToTrack: %w", err)
}
err = qtx.AssociateArtistToRelease(ctx, repository.AssociateArtistToReleaseParams{
ArtistID: aid,
ReleaseID: trackRow.ReleaseID,
})
if err != nil {
return nil, fmt.Errorf("SaveTrack: AssociateArtistToRelease: %w", err)
}
}
// insert primary alias
err = qtx.InsertTrackAlias(ctx, repository.InsertTrackAliasParams{
@ -242,7 +240,28 @@ func (d *Psql) SaveTrackAliases(ctx context.Context, id int32, aliases []string,
}
func (d *Psql) DeleteTrack(ctx context.Context, id int32) error {
return d.q.DeleteTrack(ctx, id)
l := logger.FromContext(ctx)
tx, err := d.conn.BeginTx(ctx, pgx.TxOptions{})
if err != nil {
l.Err(err).Msg("Failed to begin transaction")
return fmt.Errorf("DeleteTrack: %w", err)
}
defer tx.Rollback(ctx)
qtx := d.q.WithTx(tx)
err = qtx.DeleteTrack(ctx, id)
if err != nil {
return fmt.Errorf("DeleteTrack: DeleteTrack: %w", err)
}
// also clean orphaned entries to ensure artists are disassociated from releases
// where they no longer have any tracks on the release
err = qtx.CleanOrphanedEntries(ctx)
if err != nil {
return fmt.Errorf("DeleteTrack: CleanOrphanedEntries: %w", err)
}
return tx.Commit(ctx)
}
func (d *Psql) DeleteTrackAlias(ctx context.Context, id int32, alias string) error {
@ -380,7 +399,7 @@ func (d *Psql) SetPrimaryTrackArtist(ctx context.Context, id int32, artistId int
func (d *Psql) GetTracksWithNoDurationButHaveMbzID(ctx context.Context, from int32) ([]*models.Track, error) {
results, err := d.q.GetTracksWithNoDurationButHaveMbzID(ctx, repository.GetTracksWithNoDurationButHaveMbzIDParams{
Limit: 20,
ID: 0,
ID: from,
})
if errors.Is(err, pgx.ErrNoRows) {
return nil, nil


@ -62,7 +62,7 @@ func testDataForTracks(t *testing.T) {
VALUES (1, 1), (2, 2)`)
require.NoError(t, err)
// Associate tracks with artists
// Insert listens
err = store.Exec(context.Background(),
`INSERT INTO listens (user_id, track_id, listened_at)
VALUES (1, 1, NOW()), (1, 2, NOW())`)
@ -228,3 +228,27 @@ func TestDeleteTrack(t *testing.T) {
_, err = store.Count(ctx, `SELECT * FROM tracks WHERE id = 2`)
require.ErrorIs(t, err, pgx.ErrNoRows) // no rows error
}
func TestReleaseAssociations(t *testing.T) {
testDataForTracks(t)
ctx := context.Background()
track, err := store.SaveTrack(ctx, db.SaveTrackOpts{
Title: "Track Three",
AlbumID: 2,
ArtistIDs: []int32{2, 1}, // Artist Two feat. Artist One
Duration: 100,
})
require.NoError(t, err)
count, err := store.Count(ctx, `SELECT COUNT(*) FROM artist_releases WHERE release_id = 2`)
require.NoError(t, err)
require.Equal(t, 2, count, "expected release to be associated with artist from inserted track")
err = store.DeleteTrack(ctx, track.ID)
require.NoError(t, err)
count, err = store.Count(ctx, `SELECT COUNT(*) FROM artist_releases WHERE release_id = 2`)
require.NoError(t, err)
require.Equal(t, 1, count, "expected artist no longer on release to be disassociated from release")
}


@ -28,6 +28,11 @@ type PaginatedResponse[T any] struct {
CurrentPage int32 `json:"current_page"`
}
type RankedItem[T any] struct {
Item T `json:"item"`
Rank int64 `json:"rank"`
}
type ExportItem struct {
ListenedAt time.Time
UserID int32


@ -110,6 +110,9 @@ func (c *DeezerClient) getEntity(ctx context.Context, endpoint string, result an
return nil
}
// Deezer serves a default placeholder image when it can't find one for an artist,
// so this function will happily download that placeholder believing it is a real
// artist image. I don't know how to fix this yet.
func (c *DeezerClient) GetArtistImages(ctx context.Context, aliases []string) (string, error) {
l := logger.FromContext(ctx)
resp := new(DeezerArtistResponse)


@ -5,6 +5,7 @@ import (
"context"
"fmt"
"net/http"
"strings"
"sync"
"github.com/gabehf/koito/internal/logger"
@ -16,6 +17,8 @@ type ImageSource struct {
deezerC *DeezerClient
subsonicEnabled bool
subsonicC *SubsonicClient
lastfmEnabled bool
lastfmC *LastFMClient
caaEnabled bool
}
type ImageSourceOpts struct {
@ -23,6 +26,7 @@ type ImageSourceOpts struct {
EnableCAA bool
EnableDeezer bool
EnableSubsonic bool
EnableLastFM bool
}
var once sync.Once
@ -30,6 +34,7 @@ var imgsrc ImageSource
type ArtistImageOpts struct {
Aliases []string
MBID *uuid.UUID
}
type AlbumImageOpts struct {
@ -55,6 +60,10 @@ func Initialize(opts ImageSourceOpts) {
imgsrc.subsonicEnabled = true
imgsrc.subsonicC = NewSubsonicClient()
}
if opts.EnableLastFM {
imgsrc.lastfmEnabled = true
imgsrc.lastfmC = NewLastFMClient()
}
})
}
@ -65,31 +74,46 @@ func Shutdown() {
func GetArtistImage(ctx context.Context, opts ArtistImageOpts) (string, error) {
l := logger.FromContext(ctx)
if imgsrc.subsonicEnabled {
img, err := imgsrc.subsonicC.GetArtistImage(ctx, opts.Aliases[0])
img, err := imgsrc.subsonicC.GetArtistImage(ctx, opts.MBID, opts.Aliases[0])
if err != nil {
return "", err
}
if img != "" {
l.Debug().Err(err).Msg("GetArtistImage: Could not find artist image from Subsonic")
} else if img != "" {
return img, nil
}
l.Debug().Msg("Could not find artist image from Subsonic")
} else {
l.Debug().Msg("GetArtistImage: Subsonic image fetching is disabled")
}
if imgsrc.deezerC != nil {
if imgsrc.lastfmEnabled {
img, err := imgsrc.lastfmC.GetArtistImage(ctx, opts.MBID, opts.Aliases[0])
if err != nil {
l.Debug().Err(err).Msg("GetArtistImage: Could not find artist image from LastFM")
} else if img != "" {
return img, nil
}
} else {
l.Debug().Msg("GetArtistImage: LastFM image fetching is disabled")
}
if imgsrc.deezerEnabled {
img, err := imgsrc.deezerC.GetArtistImages(ctx, opts.Aliases)
if err != nil {
l.Debug().Err(err).Msg("GetArtistImage: Could not find artist image from Deezer")
return "", err
} else if img != "" {
return img, nil
}
return img, nil
} else {
l.Debug().Msg("GetArtistImage: Deezer image fetching is disabled")
}
l.Warn().Msg("GetArtistImage: No image providers are enabled")
return "", nil
}
func GetAlbumImage(ctx context.Context, opts AlbumImageOpts) (string, error) {
l := logger.FromContext(ctx)
if imgsrc.subsonicEnabled {
img, err := imgsrc.subsonicC.GetAlbumImage(ctx, opts.Artists[0], opts.Album)
img, err := imgsrc.subsonicC.GetAlbumImage(ctx, opts.ReleaseMbzID, opts.Artists[0], opts.Album)
if err != nil {
return "", err
l.Debug().Err(err).Msg("GetAlbumImage: Could not find album image from Subsonic")
}
if img != "" {
return img, nil
@ -102,29 +126,41 @@ func GetAlbumImage(ctx context.Context, opts AlbumImageOpts) (string, error) {
url := fmt.Sprintf(caaBaseUrl+"/release/%s/front", opts.ReleaseMbzID.String())
resp, err := http.DefaultClient.Head(url)
if err != nil {
return "", err
l.Debug().Err(err).Msg("GetAlbumImage: Could not find album image from CoverArtArchive with Release MBID")
} else {
if resp.StatusCode == 200 {
return url, nil
} else {
l.Debug().Int("status", resp.StatusCode).Msg("GetAlbumImage: Got non-OK response from CoverArtArchive")
}
}
if resp.StatusCode == 200 {
return url, nil
}
l.Debug().Str("url", url).Str("status", resp.Status).Msg("Could not find album cover from CoverArtArchive with MusicBrainz release ID")
}
if opts.ReleaseGroupMbzID != nil && *opts.ReleaseGroupMbzID != uuid.Nil {
url := fmt.Sprintf(caaBaseUrl+"/release-group/%s/front", opts.ReleaseGroupMbzID.String())
resp, err := http.DefaultClient.Head(url)
if err != nil {
return "", err
l.Debug().Err(err).Msg("GetAlbumImage: Could not find album image from CoverArtArchive with Release Group MBID")
}
if resp.StatusCode == 200 {
return url, nil
}
l.Debug().Str("url", url).Str("status", resp.Status).Msg("Could not find album cover from CoverArtArchive with MusicBrainz release group ID")
}
}
if imgsrc.lastfmEnabled {
img, err := imgsrc.lastfmC.GetAlbumImage(ctx, opts.ReleaseMbzID, opts.Artists[0], opts.Album)
if err != nil {
l.Debug().Err(err).Msg("GetAlbumImage: Could not find album image from LastFM")
}
if img != "" {
return img, nil
}
l.Debug().Msg("Could not find album cover from LastFM")
}
if imgsrc.deezerEnabled {
l.Debug().Msg("Attempting to find album image from Deezer")
img, err := imgsrc.deezerC.GetAlbumImages(ctx, opts.Artists, opts.Album)
if err != nil {
l.Debug().Err(err).Msg("GetAlbumImage: Could not find album image from Deezer")
return "", err
}
return img, nil
@ -132,3 +168,23 @@ func GetAlbumImage(ctx context.Context, opts AlbumImageOpts) (string, error) {
l.Warn().Msg("GetAlbumImage: No image providers are enabled")
return "", nil
}
// ValidateImageURL checks if the URL points to a valid image by performing a HEAD request.
func ValidateImageURL(url string) error {
resp, err := http.Head(url)
if err != nil {
return fmt.Errorf("ValidateImageURL: http.Head: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("ValidateImageURL: HEAD request failed, status code: %d", resp.StatusCode)
}
contentType := resp.Header.Get("Content-Type")
if !strings.HasPrefix(contentType, "image/") {
return fmt.Errorf("ValidateImageURL: URL does not point to an image, content type: %s", contentType)
}
return nil
}

internal/images/lastfm.go (new file, 298 lines)

@ -0,0 +1,298 @@
package images
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"github.com/gabehf/koito/internal/cfg"
"github.com/gabehf/koito/internal/logger"
"github.com/gabehf/koito/queue"
"github.com/google/uuid"
)
// i told gemini to write this cuz i figured it would be simple enough and
// it looks like it just works? maybe ai is actually worth one quintillion gallons of water
type LastFMClient struct {
apiKey string
baseUrl string
userAgent string
requestQueue *queue.RequestQueue
}
// LastFM JSON structures use "#text" for the value of XML-mapped fields
type lastFMImage struct {
URL string `json:"#text"`
Size string `json:"size"`
}
type lastFMAlbumResponse struct {
Album struct {
Name string `json:"name"`
Image []lastFMImage `json:"image"`
} `json:"album"`
Error int `json:"error"`
Message string `json:"message"`
}
type lastFMArtistResponse struct {
Artist struct {
Name string `json:"name"`
Image []lastFMImage `json:"image"`
} `json:"artist"`
Error int `json:"error"`
Message string `json:"message"`
}
const (
lastFMApiBaseUrl = "http://ws.audioscrobbler.com/2.0/"
)
func NewLastFMClient() *LastFMClient {
ret := new(LastFMClient)
ret.apiKey = cfg.LastFMApiKey()
ret.baseUrl = lastFMApiBaseUrl
ret.userAgent = cfg.UserAgent()
ret.requestQueue = queue.NewRequestQueue(5, 5)
return ret
}
func (c *LastFMClient) queue(ctx context.Context, req *http.Request) ([]byte, error) {
l := logger.FromContext(ctx)
req.Header.Set("User-Agent", c.userAgent)
req.Header.Set("Accept", "application/json")
resultChan := c.requestQueue.Enqueue(func(client *http.Client, done chan<- queue.RequestResult) {
resp, err := client.Do(req)
if err != nil {
l.Debug().Err(err).Str("url", req.URL.String()).Msg("Failed to contact LastFM")
done <- queue.RequestResult{Err: err}
return
}
defer resp.Body.Close()
// LastFM might return 200 OK even for API errors (like "Artist not found"),
// so we rely on parsing the JSON body for logic errors later,
// but we still check for HTTP protocol failures here.
if resp.StatusCode >= 500 {
err = fmt.Errorf("received server error from LastFM: %s", resp.Status)
done <- queue.RequestResult{Body: nil, Err: err}
return
}
body, err := io.ReadAll(resp.Body)
done <- queue.RequestResult{Body: body, Err: err}
})
result := <-resultChan
return result.Body, result.Err
}
func (c *LastFMClient) getEntity(ctx context.Context, params url.Values, result any) error {
l := logger.FromContext(ctx)
// Add standard parameters
params.Set("api_key", c.apiKey)
params.Set("format", "json")
// Construct URL
reqUrl, _ := url.Parse(c.baseUrl)
reqUrl.RawQuery = params.Encode()
l.Debug().Msgf("Sending request to LastFM: GET %s", reqUrl.String())
req, err := http.NewRequest("GET", reqUrl.String(), nil)
if err != nil {
return fmt.Errorf("getEntity: %w", err)
}
l.Debug().Msg("Adding LastFM request to queue")
body, err := c.queue(ctx, req)
if err != nil {
l.Err(err).Msg("LastFM request failed")
return fmt.Errorf("getEntity: %w", err)
}
err = json.Unmarshal(body, result)
if err != nil {
l.Err(err).Msg("Failed to unmarshal LastFM response")
return fmt.Errorf("getEntity: %w", err)
}
return nil
}
// selectBestImage picks the largest available image from the LastFM slice
func (c *LastFMClient) selectBestImage(images []lastFMImage) string {
// Rank preference: mega > extralarge > large > medium > small
// Since LastFM usually returns them in order of size, we could take the last one,
// but a map lookup is safer against API changes.
imgMap := make(map[string]string)
for _, img := range images {
if img.URL != "" {
imgMap[img.Size] = img.URL
}
}
if url, ok := imgMap["mega"]; ok {
if err := ValidateImageURL(overrideImgSize(url)); err == nil {
return overrideImgSize(url)
} else {
return url
}
}
if url, ok := imgMap["extralarge"]; ok {
if err := ValidateImageURL(overrideImgSize(url)); err == nil {
return overrideImgSize(url)
} else {
return url
}
}
if url, ok := imgMap["large"]; ok {
if err := ValidateImageURL(overrideImgSize(url)); err == nil {
return overrideImgSize(url)
} else {
return url
}
}
if url, ok := imgMap["medium"]; ok {
return url
}
if url, ok := imgMap["small"]; ok {
return url
}
return ""
}
// lastfm seems to only return a 300x300 image even for "mega" and "extralarge" images, so I'm cheating
func overrideImgSize(url string) string {
return strings.Replace(url, "300x300", "600x600", 1)
}
func (c *LastFMClient) GetAlbumImage(ctx context.Context, mbid *uuid.UUID, artist, album string) (string, error) {
l := logger.FromContext(ctx)
resp := new(lastFMAlbumResponse)
l.Debug().Msgf("Finding album image for %s from artist %s", album, artist)
// Helper to run the fetch
fetch := func(query paramsBuilder) error {
params := url.Values{}
params.Set("method", "album.getInfo")
query(params)
return c.getEntity(ctx, params, resp)
}
// 1. Try MBID search first
if mbid != nil {
l.Debug().Str("mbid", mbid.String()).Msg("Searching album image by MBID")
err := fetch(func(p url.Values) {
p.Set("mbid", mbid.String())
})
// If success and no API error code
if err == nil && resp.Error == 0 && len(resp.Album.Image) > 0 {
best := c.selectBestImage(resp.Album.Image)
if best != "" {
return best, nil
}
} else if resp.Error != 0 {
l.Debug().Int("api_error", resp.Error).Msg("LastFM MBID lookup failed, falling back to name")
}
}
// 2. Fallback to Artist + Album name match
l.Debug().Str("title", album).Str("artist", artist).Msg("Searching album image by title and artist")
// Clear previous response structure just in case
resp = new(lastFMAlbumResponse)
err := fetch(func(p url.Values) {
p.Set("artist", artist)
p.Set("album", album)
// Auto-correct spelling is useful for name lookups
p.Set("autocorrect", "1")
})
if err != nil {
return "", fmt.Errorf("GetAlbumImage: %v", err)
}
if resp.Error != 0 {
return "", fmt.Errorf("GetAlbumImage: LastFM API error %d: %s", resp.Error, resp.Message)
}
best := c.selectBestImage(resp.Album.Image)
if best == "" {
return "", fmt.Errorf("GetAlbumImage: no suitable image found")
}
return best, nil
}
func (c *LastFMClient) GetArtistImage(ctx context.Context, mbid *uuid.UUID, artist string) (string, error) {
l := logger.FromContext(ctx)
resp := new(lastFMArtistResponse)
l.Debug().Msgf("Finding artist image for %s", artist)
fetch := func(query paramsBuilder) error {
params := url.Values{}
params.Set("method", "artist.getInfo")
query(params)
return c.getEntity(ctx, params, resp)
}
// 1. Try MBID search
if mbid != nil {
l.Debug().Str("mbid", mbid.String()).Msg("Searching artist image by MBID")
err := fetch(func(p url.Values) {
p.Set("mbid", mbid.String())
})
if err == nil && resp.Error == 0 && len(resp.Artist.Image) > 0 {
best := c.selectBestImage(resp.Artist.Image)
if best != "" {
// Validate to match Subsonic implementation behavior
if err := ValidateImageURL(best); err == nil {
return best, nil
}
}
}
}
// 2. Fallback to Artist name
l.Debug().Str("artist", artist).Msg("Searching artist image by name")
resp = new(lastFMArtistResponse)
err := fetch(func(p url.Values) {
p.Set("artist", artist)
p.Set("autocorrect", "1")
})
if err != nil {
return "", fmt.Errorf("GetArtistImage: %v", err)
}
if resp.Error != 0 {
return "", fmt.Errorf("GetArtistImage: LastFM API error %d: %s", resp.Error, resp.Message)
}
best := c.selectBestImage(resp.Artist.Image)
if best == "" {
return "", fmt.Errorf("GetArtistImage: no suitable image found")
}
if err := ValidateImageURL(best); err != nil {
return "", fmt.Errorf("GetArtistImage: failed to validate image url")
}
return best, nil
}
type paramsBuilder func(url.Values)


@ -11,6 +11,7 @@ import (
"github.com/gabehf/koito/internal/cfg"
"github.com/gabehf/koito/internal/logger"
"github.com/gabehf/koito/queue"
"github.com/google/uuid"
)
type SubsonicClient struct {
@ -26,6 +27,8 @@ type SubsonicAlbumResponse struct {
SearchResult3 struct {
Album []struct {
CoverArt string `json:"coverArt"`
Artist string `json:"artist"`
MBID string `json:"musicBrainzId"`
} `json:"album"`
} `json:"searchResult3"`
} `json:"subsonic-response"`
@ -43,7 +46,7 @@ type SubsonicArtistResponse struct {
}
const (
subsonicAlbumSearchFmtStr = "/rest/search3?%s&f=json&query=%s&v=1.13.0&c=koito&artistCount=0&songCount=0&albumCount=1"
subsonicAlbumSearchFmtStr = "/rest/search3?%s&f=json&query=%s&v=1.13.0&c=koito&artistCount=0&songCount=0&albumCount=10"
subsonicArtistSearchFmtStr = "/rest/search3?%s&f=json&query=%s&v=1.13.0&c=koito&artistCount=1&songCount=0&albumCount=0"
subsonicCoverArtFmtStr = "/rest/getCoverArt?%s&id=%s&v=1.13.0&c=koito"
)
@ -106,32 +109,72 @@ func (c *SubsonicClient) getEntity(ctx context.Context, endpoint string, result
return nil
}
func (c *SubsonicClient) GetAlbumImage(ctx context.Context, artist, album string) (string, error) {
func (c *SubsonicClient) GetAlbumImage(ctx context.Context, mbid *uuid.UUID, artist, album string) (string, error) {
l := logger.FromContext(ctx)
resp := new(SubsonicAlbumResponse)
l.Debug().Msgf("Finding album image for %s from artist %s", album, artist)
err := c.getEntity(ctx, fmt.Sprintf(subsonicAlbumSearchFmtStr, c.authParams, url.QueryEscape(artist+" "+album)), resp)
// first try mbid search
if mbid != nil {
l.Debug().Str("mbid", mbid.String()).Msg("Searching album image by MBID")
err := c.getEntity(ctx, fmt.Sprintf(subsonicAlbumSearchFmtStr, c.authParams, url.QueryEscape(mbid.String())), resp)
if err != nil {
return "", fmt.Errorf("GetAlbumImage: %v", err)
}
l.Debug().Any("subsonic_response", resp).Msg("")
if len(resp.SubsonicResponse.SearchResult3.Album) >= 1 {
return cfg.SubsonicUrl() + fmt.Sprintf(subsonicCoverArtFmtStr, c.authParams, url.QueryEscape(resp.SubsonicResponse.SearchResult3.Album[0].CoverArt)), nil
}
}
// else do artist match
l.Debug().Str("title", album).Str("artist", artist).Msg("Searching album image by title and artist")
err := c.getEntity(ctx, fmt.Sprintf(subsonicAlbumSearchFmtStr, c.authParams, url.QueryEscape(album)), resp)
if err != nil {
return "", fmt.Errorf("GetAlbumImage: %v", err)
}
l.Debug().Any("subsonic_response", resp).Send()
if len(resp.SubsonicResponse.SearchResult3.Album) < 1 || resp.SubsonicResponse.SearchResult3.Album[0].CoverArt == "" {
return "", fmt.Errorf("GetAlbumImage: failed to get album art")
l.Debug().Any("subsonic_response", resp).Msg("")
if len(resp.SubsonicResponse.SearchResult3.Album) < 1 {
return "", fmt.Errorf("GetAlbumImage: failed to get album art from subsonic")
}
return cfg.SubsonicUrl() + fmt.Sprintf(subsonicCoverArtFmtStr, c.authParams, url.QueryEscape(resp.SubsonicResponse.SearchResult3.Album[0].CoverArt)), nil
for _, album := range resp.SubsonicResponse.SearchResult3.Album {
if album.Artist == artist {
return cfg.SubsonicUrl() + fmt.Sprintf(subsonicCoverArtFmtStr, c.authParams, url.QueryEscape(album.CoverArt)), nil
}
}
return "", fmt.Errorf("GetAlbumImage: failed to get album art from subsonic")
}
func (c *SubsonicClient) GetArtistImage(ctx context.Context, artist string) (string, error) {
func (c *SubsonicClient) GetArtistImage(ctx context.Context, mbid *uuid.UUID, artist string) (string, error) {
l := logger.FromContext(ctx)
resp := new(SubsonicArtistResponse)
l.Debug().Msgf("Finding artist image for %s", artist)
// first try mbid search
if mbid != nil {
l.Debug().Str("mbid", mbid.String()).Msg("Searching artist image by MBID")
err := c.getEntity(ctx, fmt.Sprintf(subsonicArtistSearchFmtStr, c.authParams, url.QueryEscape(mbid.String())), resp)
if err != nil {
return "", fmt.Errorf("GetArtistImage: %v", err)
}
l.Debug().Any("subsonic_response", resp).Msg("")
if len(resp.SubsonicResponse.SearchResult3.Artist) < 1 || resp.SubsonicResponse.SearchResult3.Artist[0].ArtistImageUrl == "" {
return "", fmt.Errorf("GetArtistImage: failed to get artist art")
}
// Subsonic seems to have a tendency to return an artist image even though the url is a 404
if err = ValidateImageURL(resp.SubsonicResponse.SearchResult3.Artist[0].ArtistImageUrl); err != nil {
return "", fmt.Errorf("GetArtistImage: failed to validate image url")
}
}
l.Debug().Str("artist", artist).Msg("Searching artist image by name")
err := c.getEntity(ctx, fmt.Sprintf(subsonicArtistSearchFmtStr, c.authParams, url.QueryEscape(artist)), resp)
if err != nil {
return "", fmt.Errorf("GetArtistImage: %v", err)
}
l.Debug().Any("subsonic_response", resp).Send()
l.Debug().Any("subsonic_response", resp).Msg("")
if len(resp.SubsonicResponse.SearchResult3.Artist) < 1 || resp.SubsonicResponse.SearchResult3.Artist[0].ArtistImageUrl == "" {
return "", fmt.Errorf("GetArtistImage: failed to get artist art")
}
// Subsonic seems to have a tendency to return an artist image even though the url is a 404
if err = ValidateImageURL(resp.SubsonicResponse.SearchResult3.Artist[0].ArtistImageUrl); err != nil {
return "", fmt.Errorf("GetArtistImage: failed to validate image url")
}
return resp.SubsonicResponse.SearchResult3.Artist[0].ArtistImageUrl, nil
}


@ -85,7 +85,14 @@ func ImportListenBrainzFile(ctx context.Context, store db.DB, mbzc mbz.MusicBrai
}
artistMbzIDs, err := utils.ParseUUIDSlice(payload.TrackMeta.AdditionalInfo.ArtistMBIDs)
if err != nil {
l.Debug().Err(err).Msg("Failed to parse one or more uuids")
l.Debug().AnErr("error", err).Msg("ImportListenBrainzFile: Failed to parse one or more UUIDs")
}
if len(artistMbzIDs) < 1 {
l.Debug().Msg("ImportListenBrainzFile: Attempting to parse artist UUIDs from mbid_mapping")
artistMbzIDs, err = utils.ParseUUIDSlice(payload.TrackMeta.MBIDMapping.ArtistMBIDs)
if err != nil {
l.Debug().AnErr("error", err).Msg("ImportListenBrainzFile: Failed to parse one or more UUIDs")
}
}
rgMbzID, err := uuid.Parse(payload.TrackMeta.AdditionalInfo.ReleaseGroupMBID)
if err != nil {
@ -93,11 +100,17 @@ func ImportListenBrainzFile(ctx context.Context, store db.DB, mbzc mbz.MusicBrai
}
releaseMbzID, err := uuid.Parse(payload.TrackMeta.AdditionalInfo.ReleaseMBID)
if err != nil {
releaseMbzID = uuid.Nil
releaseMbzID, err = uuid.Parse(payload.TrackMeta.MBIDMapping.ReleaseMBID)
if err != nil {
releaseMbzID = uuid.Nil
}
}
recordingMbzID, err := uuid.Parse(payload.TrackMeta.AdditionalInfo.RecordingMBID)
if err != nil {
recordingMbzID = uuid.Nil
recordingMbzID, err = uuid.Parse(payload.TrackMeta.MBIDMapping.RecordingMBID)
if err != nil {
recordingMbzID = uuid.Nil
}
}
var client string


@ -12,11 +12,5 @@ type Album struct {
ListenCount int64 `json:"listen_count"`
TimeListened int64 `json:"time_listened"`
FirstListen int64 `json:"first_listen"`
AllTimeRank int64 `json:"all_time_rank"`
}
// type SimpleAlbum struct {
// ID int32 `json:"id"`
// Title string `json:"title"`
// VariousArtists bool `json:"is_various_artists"`
// Image uuid.UUID `json:"image"`
// }


@ -12,6 +12,7 @@ type Artist struct {
TimeListened int64 `json:"time_listened"`
FirstListen int64 `json:"first_listen"`
IsPrimary bool `json:"is_primary,omitempty"`
AllTimeRank int64 `json:"all_time_rank"`
}
type SimpleArtist struct {


@ -13,4 +13,5 @@ type Track struct {
AlbumID int32 `json:"album_id"`
TimeListened int64 `json:"time_listened"`
FirstListen int64 `json:"first_listen"`
AllTimeRank int64 `json:"all_time_rank"`
}


@ -134,6 +134,39 @@ func (q *Queries) GetArtist(ctx context.Context, id int32) (GetArtistRow, error)
return i, err
}
const getArtistAllTimeRank = `-- name: GetArtistAllTimeRank :one
SELECT
artist_id,
rank
FROM (
SELECT
x.artist_id,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
at.artist_id,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN artist_tracks at ON t.id = at.track_id
GROUP BY at.artist_id
) x
) ranked
WHERE artist_id = $1
`
type GetArtistAllTimeRankRow struct {
ArtistID int32
Rank int64
}
func (q *Queries) GetArtistAllTimeRank(ctx context.Context, artistID int32) (GetArtistAllTimeRankRow, error) {
row := q.db.QueryRow(ctx, getArtistAllTimeRank, artistID)
var i GetArtistAllTimeRankRow
err := row.Scan(&i.ArtistID, &i.Rank)
return i, err
}
const getArtistByImage = `-- name: GetArtistByImage :one
SELECT id, musicbrainz_id, image, image_source FROM artists WHERE image = $1 LIMIT 1
`
@ -221,6 +254,47 @@ func (q *Queries) GetArtistByName(ctx context.Context, alias string) (GetArtistB
return i, err
}
const getArtistsWithoutImages = `-- name: GetArtistsWithoutImages :many
SELECT
id, musicbrainz_id, image, image_source, name
FROM artists_with_name
WHERE image IS NULL
AND id > $2
ORDER BY id ASC
LIMIT $1
`
type GetArtistsWithoutImagesParams struct {
Limit int32
ID int32
}
func (q *Queries) GetArtistsWithoutImages(ctx context.Context, arg GetArtistsWithoutImagesParams) ([]ArtistsWithName, error) {
rows, err := q.db.Query(ctx, getArtistsWithoutImages, arg.Limit, arg.ID)
if err != nil {
return nil, err
}
defer rows.Close()
var items []ArtistsWithName
for rows.Next() {
var i ArtistsWithName
if err := rows.Scan(
&i.ID,
&i.MusicBrainzID,
&i.Image,
&i.ImageSource,
&i.Name,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getReleaseArtists = `-- name: GetReleaseArtists :many
SELECT
a.id, a.musicbrainz_id, a.image, a.image_source, a.name,
@ -269,18 +343,27 @@ func (q *Queries) GetReleaseArtists(ctx context.Context, releaseID int32) ([]Get
const getTopArtistsPaginated = `-- name: GetTopArtistsPaginated :many
SELECT
x.id,
x.name,
x.musicbrainz_id,
x.image,
x.listen_count,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
a.id,
a.name,
a.musicbrainz_id,
a.image,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN artist_tracks at ON at.track_id = t.id
JOIN artists_with_name a ON a.id = at.artist_id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY a.id, a.name, a.musicbrainz_id, a.image, a.image_source, a.name
ORDER BY listen_count DESC, a.id
FROM listens l
JOIN tracks t ON l.track_id = t.id
JOIN artist_tracks at ON at.track_id = t.id
JOIN artists_with_name a ON a.id = at.artist_id
WHERE l.listened_at BETWEEN $1 AND $2
GROUP BY a.id, a.name, a.musicbrainz_id, a.image
) x
ORDER BY x.listen_count DESC, x.id
LIMIT $3 OFFSET $4
`
@ -297,6 +380,7 @@ type GetTopArtistsPaginatedRow struct {
MusicBrainzID *uuid.UUID
Image *uuid.UUID
ListenCount int64
Rank int64
}
func (q *Queries) GetTopArtistsPaginated(ctx context.Context, arg GetTopArtistsPaginatedParams) ([]GetTopArtistsPaginatedRow, error) {
@ -319,6 +403,7 @@ func (q *Queries) GetTopArtistsPaginated(ctx context.Context, arg GetTopArtistsP
&i.MusicBrainzID,
&i.Image,
&i.ListenCount,
&i.Rank,
); err != nil {
return nil, err
}


@ -15,11 +15,17 @@ BEGIN
DELETE FROM tracks WHERE id NOT IN (SELECT l.track_id FROM listens l);
DELETE FROM releases WHERE id NOT IN (SELECT t.release_id FROM tracks t);
DELETE FROM artists WHERE id NOT IN (SELECT at.artist_id FROM artist_tracks at);
DELETE FROM artist_releases ar
WHERE NOT EXISTS (
SELECT 1
FROM artist_tracks at
JOIN tracks t ON at.track_id = t.id
WHERE at.artist_id = ar.artist_id
AND t.release_id = ar.release_id
);
END $$
`
// DELETE FROM releases WHERE release_group_id NOT IN (SELECT t.release_group_id FROM tracks t);
// DELETE FROM releases WHERE release_group_id NOT IN (SELECT rg.id FROM release_groups rg);
func (q *Queries) CleanOrphanedEntries(ctx context.Context) error {
_, err := q.db.Exec(ctx, cleanOrphanedEntries)
return err


@ -11,64 +11,57 @@ import (
)
const getGroupedListensFromArtist = `-- name: GetGroupedListensFromArtist :many
WITH bounds AS (
    SELECT
        MIN(l.listened_at) AS start_time,
        NOW() AS end_time
    FROM listens l
    JOIN tracks t ON t.id = l.track_id
    JOIN artist_tracks at ON at.track_id = t.id
    WHERE at.artist_id = $1
),
stats AS (
    SELECT
        start_time,
        end_time,
        EXTRACT(EPOCH FROM (end_time - start_time)) AS total_seconds,
        ((end_time - start_time) / $2::int) AS bucket_interval
    FROM bounds
),
bucket_series AS (
    SELECT generate_series(0, $2::int - 1) AS idx
),
listen_indices AS (
    SELECT
        LEAST(
            $2::int - 1,
            FLOOR(
                (EXTRACT(EPOCH FROM (l.listened_at - s.start_time)) / NULLIF(s.total_seconds, 0))
                * $2::int
            )::int
        ) AS bucket_idx
    FROM listens l
    JOIN tracks t ON t.id = l.track_id
    JOIN artist_tracks at ON at.track_id = t.id
    CROSS JOIN stats s
    WHERE at.artist_id = $1
      AND s.start_time IS NOT NULL
)
SELECT
    (s.start_time + (s.bucket_interval * bs.idx))::timestamptz AS bucket_start,
    (s.start_time + (s.bucket_interval * (bs.idx + 1)))::timestamptz AS bucket_end,
    COUNT(li.bucket_idx) AS listen_count
FROM bucket_series bs
CROSS JOIN stats s
LEFT JOIN listen_indices li ON bs.idx = li.bucket_idx
WHERE s.start_time IS NOT NULL
GROUP BY bs.idx, s.start_time, s.bucket_interval
ORDER BY bs.idx
`
type GetGroupedListensFromArtistParams struct {
	ArtistID    int32
	BucketCount int32
}
type GetGroupedListensFromArtistRow struct {
@ -98,63 +91,55 @@ func (q *Queries) GetGroupedListensFromArtist(ctx context.Context, arg GetGroupe
}
const getGroupedListensFromRelease = `-- name: GetGroupedListensFromRelease :many
WITH bounds AS (
    SELECT
        MIN(l.listened_at) AS start_time,
        NOW() AS end_time
    FROM listens l
    JOIN tracks t ON t.id = l.track_id
    WHERE t.release_id = $1
),
stats AS (
    SELECT
        start_time,
        end_time,
        EXTRACT(EPOCH FROM (end_time - start_time)) AS total_seconds,
        ((end_time - start_time) / $2::int) AS bucket_interval
    FROM bounds
),
bucket_series AS (
    SELECT generate_series(0, $2::int - 1) AS idx
),
listen_indices AS (
    SELECT
        LEAST(
            $2::int - 1,
            FLOOR(
                (EXTRACT(EPOCH FROM (l.listened_at - s.start_time)) / NULLIF(s.total_seconds, 0))
                * $2::int
            )::int
        ) AS bucket_idx
    FROM listens l
    JOIN tracks t ON t.id = l.track_id
    CROSS JOIN stats s
    WHERE t.release_id = $1
      AND s.start_time IS NOT NULL
)
SELECT
    (s.start_time + (s.bucket_interval * bs.idx))::timestamptz AS bucket_start,
    (s.start_time + (s.bucket_interval * (bs.idx + 1)))::timestamptz AS bucket_end,
    COUNT(li.bucket_idx) AS listen_count
FROM bucket_series bs
CROSS JOIN stats s
LEFT JOIN listen_indices li ON bs.idx = li.bucket_idx
WHERE s.start_time IS NOT NULL
GROUP BY bs.idx, s.start_time, s.bucket_interval
ORDER BY bs.idx
`
type GetGroupedListensFromReleaseParams struct {
	ReleaseID   int32
	BucketCount int32
}
type GetGroupedListensFromReleaseRow struct {
@ -184,63 +169,55 @@ func (q *Queries) GetGroupedListensFromRelease(ctx context.Context, arg GetGroup
}
const getGroupedListensFromTrack = `-- name: GetGroupedListensFromTrack :many
WITH bounds AS (
    SELECT
        MIN(l.listened_at) AS start_time,
        NOW() AS end_time
    FROM listens l
    JOIN tracks t ON t.id = l.track_id
    WHERE t.id = $1
),
stats AS (
    SELECT
        start_time,
        end_time,
        EXTRACT(EPOCH FROM (end_time - start_time)) AS total_seconds,
        ((end_time - start_time) / $2::int) AS bucket_interval
    FROM bounds
),
bucket_series AS (
    SELECT generate_series(0, $2::int - 1) AS idx
),
listen_indices AS (
    SELECT
        LEAST(
            $2::int - 1,
            FLOOR(
                (EXTRACT(EPOCH FROM (l.listened_at - s.start_time)) / NULLIF(s.total_seconds, 0))
                * $2::int
            )::int
        ) AS bucket_idx
    FROM listens l
    JOIN tracks t ON t.id = l.track_id
    CROSS JOIN stats s
    WHERE t.id = $1
      AND s.start_time IS NOT NULL
)
SELECT
    (s.start_time + (s.bucket_interval * bs.idx))::timestamptz AS bucket_start,
    (s.start_time + (s.bucket_interval * (bs.idx + 1)))::timestamptz AS bucket_end,
    COUNT(li.bucket_idx) AS listen_count
FROM bucket_series bs
CROSS JOIN stats s
LEFT JOIN listen_indices li ON bs.idx = li.bucket_idx
WHERE s.start_time IS NOT NULL
GROUP BY bs.idx, s.start_time, s.bucket_interval
ORDER BY bs.idx
`
type GetGroupedListensFromTrackParams struct {
	ID          int32
	BucketCount int32
}
type GetGroupedListensFromTrackRow struct {


@ -141,6 +141,38 @@ func (q *Queries) GetRelease(ctx context.Context, id int32) (GetReleaseRow, erro
return i, err
}
const getReleaseAllTimeRank = `-- name: GetReleaseAllTimeRank :one
SELECT
release_id,
rank
FROM (
SELECT
x.release_id,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
t.release_id,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks t ON l.track_id = t.id
GROUP BY t.release_id
) x
) y
WHERE release_id = $1
`
type GetReleaseAllTimeRankRow struct {
ReleaseID int32
Rank int64
}
func (q *Queries) GetReleaseAllTimeRank(ctx context.Context, releaseID int32) (GetReleaseAllTimeRankRow, error) {
row := q.db.QueryRow(ctx, getReleaseAllTimeRank, releaseID)
var i GetReleaseAllTimeRankRow
err := row.Scan(&i.ReleaseID, &i.Rank)
return i, err
}
const getReleaseByArtistAndTitle = `-- name: GetReleaseByArtistAndTitle :one
SELECT r.id, r.musicbrainz_id, r.image, r.various_artists, r.image_source, r.title
FROM releases_with_title r
@ -321,17 +353,22 @@ func (q *Queries) GetReleasesWithoutImages(ctx context.Context, arg GetReleasesW
const getTopReleasesFromArtist = `-- name: GetTopReleasesFromArtist :many
SELECT
    x.id, x.musicbrainz_id, x.image, x.various_artists, x.image_source, x.title, x.listen_count,
    get_artists_for_release(x.id) AS artists,
    RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
    SELECT
        r.id, r.musicbrainz_id, r.image, r.various_artists, r.image_source, r.title,
        COUNT(*) AS listen_count
    FROM listens l
    JOIN tracks t ON l.track_id = t.id
    JOIN releases_with_title r ON t.release_id = r.id
    JOIN artist_releases ar ON r.id = ar.release_id
    WHERE ar.artist_id = $5
      AND l.listened_at BETWEEN $1 AND $2
    GROUP BY r.id, r.title, r.musicbrainz_id, r.various_artists, r.image, r.image_source
) x
ORDER BY listen_count DESC, x.id
LIMIT $3 OFFSET $4
`
@ -352,6 +389,7 @@ type GetTopReleasesFromArtistRow struct {
Title string
ListenCount int64
Artists []byte
Rank int64
}
func (q *Queries) GetTopReleasesFromArtist(ctx context.Context, arg GetTopReleasesFromArtistParams) ([]GetTopReleasesFromArtistRow, error) {
@ -378,6 +416,7 @@ func (q *Queries) GetTopReleasesFromArtist(ctx context.Context, arg GetTopReleas
&i.Title,
&i.ListenCount,
&i.Artists,
&i.Rank,
); err != nil {
return nil, err
}
@ -391,15 +430,20 @@ func (q *Queries) GetTopReleasesFromArtist(ctx context.Context, arg GetTopReleas
const getTopReleasesPaginated = `-- name: GetTopReleasesPaginated :many
SELECT
    x.id, x.musicbrainz_id, x.image, x.various_artists, x.image_source, x.title, x.listen_count,
    get_artists_for_release(x.id) AS artists,
    RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
    SELECT
        r.id, r.musicbrainz_id, r.image, r.various_artists, r.image_source, r.title,
        COUNT(*) AS listen_count
    FROM listens l
    JOIN tracks t ON l.track_id = t.id
    JOIN releases_with_title r ON t.release_id = r.id
    WHERE l.listened_at BETWEEN $1 AND $2
    GROUP BY r.id, r.title, r.musicbrainz_id, r.various_artists, r.image, r.image_source
) x
ORDER BY listen_count DESC, x.id
LIMIT $3 OFFSET $4
`
@ -419,6 +463,7 @@ type GetTopReleasesPaginatedRow struct {
Title string
ListenCount int64
Artists []byte
Rank int64
}
func (q *Queries) GetTopReleasesPaginated(ctx context.Context, arg GetTopReleasesPaginatedParams) ([]GetTopReleasesPaginatedRow, error) {
@ -444,6 +489,7 @@ func (q *Queries) GetTopReleasesPaginated(ctx context.Context, arg GetTopRelease
&i.Title,
&i.ListenCount,
&i.Artists,
&i.Rank,
); err != nil {
return nil, err
}


@ -155,22 +155,30 @@ func (q *Queries) GetAllTracksFromArtist(ctx context.Context, artistID int32) ([
const getTopTracksByArtistPaginated = `-- name: GetTopTracksByArtistPaginated :many
SELECT
    x.track_id AS id,
    t.title,
    t.musicbrainz_id,
    t.release_id,
    r.image,
    x.listen_count,
    get_artists_for_track(x.track_id) AS artists,
    x.rank
FROM (
    SELECT
        l.track_id,
        COUNT(*) AS listen_count,
        RANK() OVER (ORDER BY COUNT(*) DESC) AS rank
    FROM listens l
    JOIN artist_tracks at ON l.track_id = at.track_id
    WHERE l.listened_at BETWEEN $1 AND $2
      AND at.artist_id = $5
    GROUP BY l.track_id
    ORDER BY listen_count DESC
    LIMIT $3 OFFSET $4
) x
JOIN tracks_with_title t ON x.track_id = t.id
JOIN releases r ON t.release_id = r.id
ORDER BY x.listen_count DESC, x.track_id
`
type GetTopTracksByArtistPaginatedParams struct {
@ -189,6 +197,7 @@ type GetTopTracksByArtistPaginatedRow struct {
Image *uuid.UUID
ListenCount int64
Artists []byte
Rank int64
}
func (q *Queries) GetTopTracksByArtistPaginated(ctx context.Context, arg GetTopTracksByArtistPaginatedParams) ([]GetTopTracksByArtistPaginatedRow, error) {
@ -214,6 +223,7 @@ func (q *Queries) GetTopTracksByArtistPaginated(ctx context.Context, arg GetTopT
&i.Image,
&i.ListenCount,
&i.Artists,
&i.Rank,
); err != nil {
return nil, err
}
@ -227,21 +237,30 @@ func (q *Queries) GetTopTracksByArtistPaginated(ctx context.Context, arg GetTopT
const getTopTracksInReleasePaginated = `-- name: GetTopTracksInReleasePaginated :many
SELECT
    x.track_id AS id,
    t.title,
    t.musicbrainz_id,
    t.release_id,
    r.image,
    x.listen_count,
    get_artists_for_track(x.track_id) AS artists,
    x.rank
FROM (
    SELECT
        l.track_id,
        COUNT(*) AS listen_count,
        RANK() OVER (ORDER BY COUNT(*) DESC) AS rank
    FROM listens l
    JOIN tracks t ON l.track_id = t.id
    WHERE l.listened_at BETWEEN $1 AND $2
      AND t.release_id = $5
    GROUP BY l.track_id
    ORDER BY listen_count DESC
    LIMIT $3 OFFSET $4
) x
JOIN tracks_with_title t ON x.track_id = t.id
JOIN releases r ON t.release_id = r.id
ORDER BY x.listen_count DESC, x.track_id
`
type GetTopTracksInReleasePaginatedParams struct {
@ -260,6 +279,7 @@ type GetTopTracksInReleasePaginatedRow struct {
Image *uuid.UUID
ListenCount int64
Artists []byte
Rank int64
}
func (q *Queries) GetTopTracksInReleasePaginated(ctx context.Context, arg GetTopTracksInReleasePaginatedParams) ([]GetTopTracksInReleasePaginatedRow, error) {
@ -285,6 +305,7 @@ func (q *Queries) GetTopTracksInReleasePaginated(ctx context.Context, arg GetTop
&i.Image,
&i.ListenCount,
&i.Artists,
&i.Rank,
); err != nil {
return nil, err
}
@ -298,20 +319,28 @@ func (q *Queries) GetTopTracksInReleasePaginated(ctx context.Context, arg GetTop
const getTopTracksPaginated = `-- name: GetTopTracksPaginated :many
SELECT
    x.track_id AS id,
    t.title,
    t.musicbrainz_id,
    t.release_id,
    r.image,
    x.listen_count,
    get_artists_for_track(x.track_id) AS artists,
    x.rank
FROM (
    SELECT
        track_id,
        COUNT(*) AS listen_count,
        RANK() OVER (ORDER BY COUNT(*) DESC) AS rank
    FROM listens
    WHERE listened_at BETWEEN $1 AND $2
    GROUP BY track_id
    ORDER BY listen_count DESC
    LIMIT $3 OFFSET $4
) x
JOIN tracks_with_title t ON x.track_id = t.id
JOIN releases r ON t.release_id = r.id
ORDER BY x.listen_count DESC, x.track_id
`
type GetTopTracksPaginatedParams struct {
@ -329,6 +358,7 @@ type GetTopTracksPaginatedRow struct {
Image *uuid.UUID
ListenCount int64
Artists []byte
Rank int64
}
func (q *Queries) GetTopTracksPaginated(ctx context.Context, arg GetTopTracksPaginatedParams) ([]GetTopTracksPaginatedRow, error) {
@ -353,6 +383,7 @@ func (q *Queries) GetTopTracksPaginated(ctx context.Context, arg GetTopTracksPag
&i.Image,
&i.ListenCount,
&i.Artists,
&i.Rank,
); err != nil {
return nil, err
}
@ -399,6 +430,37 @@ func (q *Queries) GetTrack(ctx context.Context, id int32) (GetTrackRow, error) {
return i, err
}
const getTrackAllTimeRank = `-- name: GetTrackAllTimeRank :one
SELECT
id,
rank
FROM (
SELECT
x.id,
RANK() OVER (ORDER BY x.listen_count DESC) AS rank
FROM (
SELECT
t.id,
COUNT(*) AS listen_count
FROM listens l
JOIN tracks_with_title t ON l.track_id = t.id
GROUP BY t.id) x
) y
WHERE id = $1
`
type GetTrackAllTimeRankRow struct {
ID int32
Rank int64
}
func (q *Queries) GetTrackAllTimeRank(ctx context.Context, id int32) (GetTrackAllTimeRankRow, error) {
row := q.db.QueryRow(ctx, getTrackAllTimeRank, id)
var i GetTrackAllTimeRankRow
err := row.Scan(&i.ID, &i.Rank)
return i, err
}
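The all-time rank queries above rely on SQL's `RANK()` tie semantics: tied listen counts share a rank, and the next distinct count skips past the tied group. A minimal standalone Go illustration of that windowing behavior (`sqlRank` is a hypothetical helper, not part of the generated code):

```go
package main

import (
	"fmt"
	"sort"
)

// sqlRank reproduces RANK() OVER (ORDER BY count DESC) for a slice of
// listen counts: ties share a rank, and the rank after a tied group
// jumps ahead by the size of that group (1, 1, 3, ... not 1, 1, 2, ...).
func sqlRank(counts []int) []int {
	sorted := append([]int(nil), counts...)
	sort.Sort(sort.Reverse(sort.IntSlice(sorted)))
	ranks := make([]int, len(sorted))
	for i := range sorted {
		if i > 0 && sorted[i] == sorted[i-1] {
			ranks[i] = ranks[i-1] // tie keeps the previous rank
		} else {
			ranks[i] = i + 1 // next distinct value skips the tied positions
		}
	}
	return ranks
}

func main() {
	fmt.Println(sqlRank([]int{42, 42, 17, 9})) // [1 1 3 4]
}
```

This is why two tracks with identical play counts can both show as #1 while the next track shows as #3.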
const getTrackByMbzID = `-- name: GetTrackByMbzID :one
SELECT id, musicbrainz_id, duration, release_id, title FROM tracks_with_title
WHERE musicbrainz_id = $1 LIMIT 1


@ -9,20 +9,20 @@ import (
)
type Summary struct {
	Title            string                          `json:"title,omitempty"`
	TopArtists       []db.RankedItem[*models.Artist] `json:"top_artists"` // ListenCount and TimeListened are overridden with stats from timeframe
	TopAlbums        []db.RankedItem[*models.Album]  `json:"top_albums"`  // ListenCount and TimeListened are overridden with stats from timeframe
	TopTracks        []db.RankedItem[*models.Track]  `json:"top_tracks"`  // ListenCount and TimeListened are overridden with stats from timeframe
	MinutesListened  int                             `json:"minutes_listened"`
	AvgMinutesPerDay int                             `json:"avg_minutes_listened_per_day"`
	Plays            int                             `json:"plays"`
	AvgPlaysPerDay   float32                         `json:"avg_plays_per_day"`
	UniqueTracks     int                             `json:"unique_tracks"`
	UniqueAlbums     int                             `json:"unique_albums"`
	UniqueArtists    int                             `json:"unique_artists"`
	NewTracks        int                             `json:"new_tracks"`
	NewAlbums        int                             `json:"new_albums"`
	NewArtists       int                             `json:"new_artists"`
}
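The summary fields now carry `db.RankedItem` wrappers, and the loops below reach the payload through `.Item`. The repo's actual definition is not shown in this diff; inferring from the accesses (`artist.Item.ID`, `summary.TopArtists[i].Item.ListenCount`), it is presumably a small generic wrapper along these lines:

```go
package main

import "fmt"

// RankedItem pairs a query result with its RANK() value.
// Hypothetical shape inferred from the field accesses in GenerateSummary;
// the real definition lives in the repo's db package.
type RankedItem[T any] struct {
	Item T     `json:"item"`
	Rank int64 `json:"rank"`
}

// Artist stands in for models.Artist here.
type Artist struct {
	ID          int32
	ListenCount int64
}

func main() {
	top := []RankedItem[*Artist]{{Item: &Artist{ID: 7}, Rank: 1}}
	// Because Item is a pointer, stats can be overridden in place,
	// as the timeframe loops below do.
	top[0].Item.ListenCount = 42
	fmt.Println(top[0].Rank, top[0].Item.ListenCount) // 1 42
}
```

Keeping the pointer payload means the per-timeframe overrides mutate the same objects the JSON encoder will serialize.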
func GenerateSummary(ctx context.Context, store db.DB, userId int32, timeframe db.Timeframe, title string) (summary *Summary, err error) {
@ -37,16 +37,16 @@ func GenerateSummary(ctx context.Context, store db.DB, userId int32, timeframe d
summary.TopArtists = topArtists.Items
// replace ListenCount and TimeListened with stats from timeframe
for i, artist := range summary.TopArtists {
	timelistened, err := store.CountTimeListenedToItem(ctx, db.TimeListenedOpts{ArtistID: artist.Item.ID, Timeframe: timeframe})
	if err != nil {
		return nil, fmt.Errorf("GenerateSummary: %w", err)
	}
	listens, err := store.CountListensToItem(ctx, db.TimeListenedOpts{ArtistID: artist.Item.ID, Timeframe: timeframe})
	if err != nil {
		return nil, fmt.Errorf("GenerateSummary: %w", err)
	}
	summary.TopArtists[i].Item.TimeListened = timelistened
	summary.TopArtists[i].Item.ListenCount = listens
}
topAlbums, err := store.GetTopAlbumsPaginated(ctx, db.GetItemsOpts{Page: 1, Limit: 5, Timeframe: timeframe})
@ -56,16 +56,16 @@ func GenerateSummary(ctx context.Context, store db.DB, userId int32, timeframe d
summary.TopAlbums = topAlbums.Items
// replace ListenCount and TimeListened with stats from timeframe
for i, album := range summary.TopAlbums {
	timelistened, err := store.CountTimeListenedToItem(ctx, db.TimeListenedOpts{AlbumID: album.Item.ID, Timeframe: timeframe})
	if err != nil {
		return nil, fmt.Errorf("GenerateSummary: %w", err)
	}
	listens, err := store.CountListensToItem(ctx, db.TimeListenedOpts{AlbumID: album.Item.ID, Timeframe: timeframe})
	if err != nil {
		return nil, fmt.Errorf("GenerateSummary: %w", err)
	}
	summary.TopAlbums[i].Item.TimeListened = timelistened
	summary.TopAlbums[i].Item.ListenCount = listens
}
topTracks, err := store.GetTopTracksPaginated(ctx, db.GetItemsOpts{Page: 1, Limit: 5, Timeframe: timeframe})
@ -75,16 +75,16 @@ func GenerateSummary(ctx context.Context, store db.DB, userId int32, timeframe d
summary.TopTracks = topTracks.Items
// replace ListenCount and TimeListened with stats from timeframe
for i, track := range summary.TopTracks {
	timelistened, err := store.CountTimeListenedToItem(ctx, db.TimeListenedOpts{TrackID: track.Item.ID, Timeframe: timeframe})
	if err != nil {
		return nil, fmt.Errorf("GenerateSummary: %w", err)
	}
	listens, err := store.CountListensToItem(ctx, db.TimeListenedOpts{TrackID: track.Item.ID, Timeframe: timeframe})
	if err != nil {
		return nil, fmt.Errorf("GenerateSummary: %w", err)
	}
	summary.TopTracks[i].Item.TimeListened = timelistened
	summary.TopTracks[i].Item.ListenCount = listens
}
t1, t2 := db.TimeframeToTimeRange(timeframe)


@ -18,7 +18,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -70,7 +70,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -122,7 +122,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -174,7 +174,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -226,7 +226,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -278,7 +278,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -330,7 +330,7 @@
},
"album": {
"image_url": "https://cdn-images.dzcdn.net/images/cover/1f54d600d0ce5c88a6b2fd75659ec796/1000x1000-000000-80-0-0.jpg",
"mbid": null,
"mbid": "d0ec30bd-7cdc-417c-979d-5a0631b8a161",
"aliases": [
{
"alias": "American Football (LP3)",
@ -703,4 +703,4 @@
]
}
]
}

Binary file not shown.