sql - How do I pivot a PostgreSQL table?
Problem description
I have a table named company_representatives. Create table script:
CREATE TABLE IF NOT EXISTS company_representatives (
    _id          integer      NOT NULL,
    name         varchar(50)  NOT NULL,
    surname      varchar(100) NOT NULL,
    date_of_join date         NOT NULL,
    role         varchar(250) NOT NULL,
    company_id   integer      NOT NULL,
    CONSTRAINT PK_company_representatives PRIMARY KEY (_id),
    CONSTRAINT FK_144 FOREIGN KEY (company_id) REFERENCES companies (_id)
);
INSERT INTO company_representatives VALUES
(1,'random name','random surname', '2001-01-23', 'CEO', 1),
(2,'next random name','next random surname', '2001-01-23', 'Co-founder', 1),
(3,'John','Doe', '2003-02-12', 'HR', 1),
(4,'Bread','Pitt', '2001-01-23', 'Security officer', 1),
(5,'Toast','Malone', '1997-11-05', 'CEO', 2),
...
I need to pivot this table so that its columns look like this:
company_id | CEO | Co-Founder | HR | Security Officer
-----------+-----+------------+----+-----------------
         1 |   1 |          2 |  3 |                4
         2 |   5 |          6 |  7 |                8
         3 |   9 |         10 | 11 |               12
(the cell values are the _id of the company's representatives)
Solution
You can use FILTER directly in the SELECT clause:
SELECT
company_id,
count(*) FILTER (WHERE role = 'CEO') AS CEO,
count(*) FILTER (WHERE role = 'Co-founder') AS "Co-Founder",
count(*) FILTER (WHERE role = 'HR') AS HR,
count(*) FILTER (WHERE role = 'Security officer') AS "Security Officer"
FROM company_representatives
GROUP BY company_id;
The catch is that it's not clear what value you actually want attached to each role, so I assumed you simply want to count them. If not, just switch to a different aggregate function.
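Since the desired output shows each representative's _id rather than a count, a minimal variation of the same query (a sketch, assuming at most one representative per role per company) swaps count(*) for min(_id):

```sql
-- Assumes at most one representative per role per company,
-- so min(_id) simply picks that representative's _id.
SELECT company_id,
       min(_id) FILTER (WHERE role = 'CEO')              AS "CEO",
       min(_id) FILTER (WHERE role = 'Co-founder')       AS "Co-Founder",
       min(_id) FILTER (WHERE role = 'HR')               AS "HR",
       min(_id) FILTER (WHERE role = 'Security officer') AS "Security Officer"
FROM company_representatives
GROUP BY company_id
ORDER BY company_id;
```

Any other aggregate (string_agg over names, max over dates, etc.) slots into the same FILTER pattern.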
Edit (see comments): pivoting with crosstab (provided by the tablefunc extension, so run CREATE EXTENSION tablefunc once), assuming every company has one record per role:
SELECT *
FROM crosstab(
  'SELECT company_id, role, _id
   FROM company_representatives
   ORDER BY company_id, role'
) AS ct(company_id integer, ceo integer, co_founder integer,
        hr integer, security_officer integer);
演示:db<>fiddle
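One caveat: the one-argument crosstab fills the output columns left to right, so if a company is missing a role, the remaining values silently shift into the wrong columns. The two-argument form (also from tablefunc) pins each value to its category instead; a sketch, assuming the same four roles as above:

```sql
-- Two-argument crosstab: the second query enumerates the categories,
-- so a company with a missing role gets NULL in that column instead
-- of having later values shifted left.
SELECT *
FROM crosstab(
  'SELECT company_id, role, _id
   FROM company_representatives
   ORDER BY company_id, role',
  $$VALUES ('CEO'), ('Co-founder'), ('HR'), ('Security officer')$$
) AS ct(company_id integer, ceo integer, co_founder integer,
        hr integer, security_officer integer);
```

The category list must match the role strings exactly, and its order determines the output column order.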