# Offline Data Processing Answers
*A data analysis case study*
Covers data extraction, data cleansing, and metric computation tasks, together with detailed answers.
## Table of Contents
[TOC]
## Getting Started

### I. Data Extraction
#### Question 1
1. Task: Extract the full data of the ChangeRecord table in MySQL's shtd_industry database into the table changerecord in Hudi's hudi_gy_ods database. Keep the field order and types unchanged. The partition field is etldate, of type String, with its value being the day before the competition day (partition field format yyyyMMdd). Use ChangeEndTime as the PRECOMBINE_FIELD, and ChangeID together with ChangeMachineID as the composite primary key. Run `select count(*) from hudi_gy_ods.changerecord;` in the spark-sql CLI.
##### Data preparation
Here is a preview of the data in MySQL:
```
mysql> select * from changerecord;
+----------+-----------------+---------------------+---------------------+
| ChangeID | ChangeMachineID | ChangeEndTime       | etldate             |
+----------+-----------------+---------------------+---------------------+
|        1 |               1 | 2023-11-26 19:36:55 | 2023-11-26 19:36:55 |
+----------+-----------------+---------------------+---------------------+
1 row in set (0.00 sec)
```
First, import the required dependencies:
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <dependency>
        <groupId>com.mysql</groupId>
        <artifactId>mysql-connector-j</artifactId>
        <version>8.0.33</version>
    </dependency>
    <!-- Required to connect to Hive; the parquet-related classes in this package
         are also needed when reading Hudi's parquet-format data -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.12</artifactId>
        <version>3.0.0</version>
    </dependency>
    <!-- Hive JDBC driver -->
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-jdbc</artifactId>
        <version>1.2.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-spark-bundle_2.12</artifactId>
        <version>0.14.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hudi</groupId>
        <artifactId>hudi-hadoop-mr</artifactId>
        <version>0.14.0</version>
    </dependency>
</dependencies>
```
##### Create the Hudi table
```sql
create database hudi_gy_ods;
use hudi_gy_ods;
-- Table used to store the extracted data; etldate is declared as a String
-- partition field, as the task requires
CREATE TABLE `changerecord`
(
    ChangeID INT,
    ChangeMachineID INT,
    ChangeEndTime TIMESTAMP,
    PRIMARY KEY (ChangeID, ChangeMachineID) NOT ENFORCED
)
PARTITIONED BY (etldate STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION '/user/hive/warehouse/hudi_gy_ods.db/changerecord';
```
##### Data extraction
```scala
package run

import org.apache.spark.sql.{SaveMode, SparkSession}

object MyClass1 {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local").appName("insertDataToHudi")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    // Load the source data from MySQL
    spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://localhost:38243/shtd_industry")
      .option("user", "root")
      .option("password", "38243824")
      .option("dbtable", "shtd_industry.changerecord")
      .load().createTempView("myTable")
    // Set etldate as required: the day before the competition day, 20231125
    spark.sql(
      """
        |select ChangeID, ChangeMachineID, ChangeEndTime, '20231125' as etldate from myTable
        |""".stripMargin
    ).write.format("org.apache.hudi")
      .mode(SaveMode.Overwrite)
      .option("hoodie.table.name", "hudi_gy_ods.changerecord")
      // Composite key and precombine field per the task requirements
      .option("hoodie.datasource.write.recordkey.field", "ChangeID,ChangeMachineID")
      .option("hoodie.datasource.write.precombine.field", "ChangeEndTime")
      .option("hoodie.datasource.write.partitionpath.field", "etldate")
      .save("hdfs://192.168.0.141/user/hive/warehouse/hudi_gy_ods.db/changerecord")
  }
}
```
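In the code above the etldate value 20231125 is hard-coded. As a plain-Scala sketch (no Spark needed; the helper name `previousDay` is ours, not part of the task), the day-before date in yyyyMMdd form can be derived at run time:

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Format the day before a given date as yyyyMMdd, matching the partition-field format
def previousDay(today: LocalDate): String =
  today.minusDays(1).format(DateTimeFormatter.ofPattern("yyyyMMdd"))

// For a competition day of 2023-11-26 this yields "20231125"
println(previousDay(LocalDate.of(2023, 11, 26)))
```

Passing `LocalDate.now()` instead of a fixed date would make the job pick up the correct partition value on every run.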
##### Verify the result in the spark-sql CLI
```
spark-sql> select count(*) from hudi_gy_ods.changerecord;
23/11/26 20:01:11 INFO InMemoryFileIndex: It took 77 ms to list leaf files for 1 paths.
23/11/26 20:01:11 INFO FileSourceStrategy: Pruning directories with:
23/11/26 20:01:11 INFO FileSourceStrategy: Pushed Filters:
23/11/26 20:01:11 INFO FileSourceStrategy: Post-Scan Filters:
23/11/26 20:01:11 INFO FileSourceStrategy: Output Data Schema: struct<>
23/11/26 20:01:11 INFO CodeGenerator: Code generated in 19.251359 ms
23/11/26 20:01:11 INFO CodeGenerator: Code generated in 21.430745 ms
23/11/26 20:01:11 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 311.0 KiB, free 366.0 MiB)
23/11/26 20:01:12 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.3 KiB, free 366.0 MiB)
23/11/26 20:01:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on liming141:37731 (size: 28.3 KiB, free: 366.3 MiB)
23/11/26 20:01:12 INFO SparkContext: Created broadcast 0 from main at NativeMethodAccessorImpl.java:0
23/11/26 20:01:12 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
23/11/26 20:01:12 INFO SparkContext: Starting job: main at NativeMethodAccessorImpl.java:0
23/11/26 20:01:12 INFO DAGScheduler: Registering RDD 3 (main at NativeMethodAccessorImpl.java:0) as input to shuffle 0
23/11/26 20:01:12 INFO DAGScheduler: Got job 0 (main at NativeMethodAccessorImpl.java:0) with 1 output partitions
23/11/26 20:01:12 INFO DAGScheduler: Final stage: ResultStage 1 (main at NativeMethodAccessorImpl.java:0)
23/11/26 20:01:12 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
23/11/26 20:01:12 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
23/11/26 20:01:12 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at main at NativeMethodAccessorImpl.java:0), which has no missing parents
23/11/26 20:01:12 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 14.6 KiB, free 366.0 MiB)
23/11/26 20:01:12 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 6.9 KiB, free 365.9 MiB)
23/11/26 20:01:12 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on liming141:37731 (size: 6.9 KiB, free: 366.3 MiB)
23/11/26 20:01:12 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1200
23/11/26 20:01:12 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at main at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
23/11/26 20:01:12 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
23/11/26 20:01:12 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, liming141, executor driver, partition 0, ANY, 7826 bytes)
23/11/26 20:01:12 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
23/11/26 20:01:12 INFO Executor: Fetching spark://liming141:39039/jars/org.apache.hudi_hudi-spark3.0-bundle_2.12-0.14.0.jar with timestamp 1700999943257
23/11/26 20:01:12 INFO TransportClientFactory: Successfully created connection to liming141/192.168.0.141:39039 after 66 ms (0 ms spent in bootstraps)
23/11/26 20:01:12 INFO Utils: Fetching spark://liming141:39039/jars/org.apache.hudi_hudi-spark3.0-bundle_2.12-0.14.0.jar to /tmp/spark-b0a1b2e4-8469-4eae-a01e-76cb07e84b60/userFiles-fc3ae0d8-002e-474e-ad67-9e9541856032/fetchFileTemp8741618558244195396.tmp
23/11/26 20:01:13 INFO Executor: Adding file:/tmp/spark-b0a1b2e4-8469-4eae-a01e-76cb07e84b60/userFiles-fc3ae0d8-002e-474e-ad67-9e9541856032/org.apache.hudi_hudi-spark3.0-bundle_2.12-0.14.0.jar to class loader
23/11/26 20:01:13 INFO FileScanRDD: Reading File path: hdfs://liming141:8020/user/hive/warehouse/hudi_gy_ods.db/changerecord/9b0a3005-3ec0-4488-af2c-9f6f1fed940a-0_0-8-0_20231126200006310.parquet, range: 0-435101, partition values: [empty row]
23/11/26 20:01:14 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2057 bytes result sent to driver
23/11/26 20:01:14 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1384 ms on liming141 (executor driver) (1/1)
23/11/26 20:01:14 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
23/11/26 20:01:14 INFO DAGScheduler: ShuffleMapStage 0 (main at NativeMethodAccessorImpl.java:0) finished in 1.518 s
23/11/26 20:01:14 INFO DAGScheduler: looking for newly runnable stages
23/11/26 20:01:14 INFO DAGScheduler: running: Set()
23/11/26 20:01:14 INFO DAGScheduler: waiting: Set(ResultStage 1)
23/11/26 20:01:14 INFO DAGScheduler: failed: Set()
23/11/26 20:01:14 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[6] at main at NativeMethodAccessorImpl.java:0), which has no missing parents
23/11/26 20:01:14 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 10.1 KiB, free 365.9 MiB)
23/11/26 20:01:14 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 5.0 KiB, free 365.9 MiB)
23/11/26 20:01:14 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on liming141:37731 (size: 5.0 KiB, free: 366.3 MiB)
23/11/26 20:01:14 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:1200
23/11/26 20:01:14 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[6] at main at NativeMethodAccessorImpl.java:0) (first 15 tasks are for partitions Vector(0))
23/11/26 20:01:14 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
23/11/26 20:01:14 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, liming141, executor driver, partition 0, NODE_LOCAL, 7325 bytes)
23/11/26 20:01:14 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
23/11/26 20:01:14 INFO BlockManagerInfo: Removed broadcast_1_piece0 on liming141:37731 in memory (size: 6.9 KiB, free: 366.3 MiB)
23/11/26 20:01:14 INFO ShuffleBlockFetcherIterator: Getting 1 (60.0 B) non-empty blocks including 1 (60.0 B) local and 0 (0.0 B) host-local and 0 (0.0 B) remote blocks
23/11/26 20:01:14 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 36 ms
23/11/26 20:01:14 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 2536 bytes result sent to driver
23/11/26 20:01:14 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 292 ms on liming141 (executor driver) (1/1)
23/11/26 20:01:14 INFO DAGScheduler: ResultStage 1 (main at NativeMethodAccessorImpl.java:0) finished in 0.311 s
23/11/26 20:01:14 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
23/11/26 20:01:14 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
23/11/26 20:01:14 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
23/11/26 20:01:14 INFO DAGScheduler: Job 0 finished: main at NativeMethodAccessorImpl.java:0, took 1.963257 s
1
Time taken: 4.289 seconds, Fetched 1 row(s)
23/11/26 20:01:14 INFO SparkSQLCLIDriver: Time taken: 4.289 seconds, Fetched 1 row(s)
```
#### Question 2
Extract the full data of the BaseMachine table in MySQL's shtd_industry database into the table basemachine in Hudi's hudi_gy_ods database. Keep the field order and types unchanged. The partition field is etldate, of type String, with its value being the day before the competition day (partition field format yyyyMMdd). Use MachineAddDate as the PRECOMBINE_FIELD and BaseMachineID as the primary key. Run `show partitions hudi_gy_ods.basemachine` in the spark-sql CLI and paste a screenshot of the CLI output.
##### Data preparation
```
mysql> select * from BaseMachine;
+---------------+-----------------+---------------------+---------------------+
| BaseMachineID | ChangeMachineID | MachineAddDate      | etldate             |
+---------------+-----------------+---------------------+---------------------+
|             1 |               1 | 2023-11-27 16:43:48 | 2023-11-27 16:43:48 |
+---------------+-----------------+---------------------+---------------------+
1 row in set (0.00 sec)
```
##### Create the Hudi table
```sql
create database if not exists hudi_gy_ods;
use hudi_gy_ods;
-- Table used to store the extracted data
CREATE TABLE `basemachine`
(
    BaseMachineID INT,
    ChangeMachineID INT,
    MachineAddDate TIMESTAMP,
    PRIMARY KEY (BaseMachineID) NOT ENFORCED
)
PARTITIONED BY (etldate STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION '/user/hive/warehouse/hudi_gy_ods.db/basemachine';
-- Register the partition
alter table hudi_gy_ods.basemachine add if not exists partition(etldate='20231125') location '/user/hive/warehouse/hudi_gy_ods.db/basemachine/20231125';
```
##### Data extraction
```scala
package run

import org.apache.spark.sql.{SaveMode, SparkSession}

object MyClass1 {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local").appName("insertDataToHudi")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    // Load the source data from MySQL
    spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://localhost:38243/shtd_industry")
      .option("user", "root")
      .option("password", "38243824")
      .option("dbtable", "shtd_industry.BaseMachine")
      .load().createTempView("myTable")
    // Set etldate as required: the day before the competition day, 20231125
    val frame = spark.sql(
      """
        |select BaseMachineID, ChangeMachineID, MachineAddDate, '20231125' as etldate from myTable
        |""".stripMargin
    )
    frame.show()
    frame.write.format("org.apache.hudi")
      .mode(SaveMode.Overwrite)
      .option("hoodie.table.name", "hudi_gy_ods.basemachine")
      // Primary key and precombine field per the task requirements
      .option("hoodie.datasource.write.recordkey.field", "BaseMachineID")
      .option("hoodie.datasource.write.precombine.field", "MachineAddDate")
      .option("hoodie.datasource.write.partitionpath.field", "etldate")
      .save("hdfs://192.168.0.141/user/hive/warehouse/hudi_gy_ods.db/basemachine")
  }
}
```
##### Verify the result in the spark-sql CLI
```
spark-sql> show partitions hudi_gy_ods.basemachine;
etldate=20231125
Time taken: 0.177 seconds, Fetched 1 row(s)
```
### II. Data Cleansing
Write Scala code that uses Spark to extract the full data of the corresponding tables in the hudi_gy_ods database into the corresponding tables of Hudi's hudi_gy_dwd database (path /user/hive/warehouse/hudi_gy_dwd.db). Wherever a timestamp type is involved, values must follow yyyy-MM-dd HH:mm:ss without milliseconds; if the original data only contains year, month, and day, append 00:00:00 in the time-of-day position so that the value conforms to yyyy-MM-dd HH:mm:ss.
Extract the full data of changerecord in the hudi_gy_ods database into the table fact_change_record in Hudi's hudi_gy_dwd database. The partition field is etldate and its value equals that of the corresponding hudi_gy_ods table. Add four columns, dwd_insert_user, dwd_insert_time, dwd_modify_user, and dwd_modify_time, where dwd_insert_user and dwd_modify_user are both "user1", dwd_insert_time and dwd_modify_time are both the current operation time, and data types are converted accordingly. Use dwd_modify_time as the preCombineField and change_id together with change_machine_id as the composite primaryKey. In the spark-sql CLI, sort by change_machine_id and change_id, both descending, and query the first row.
#### Create the result table
```sql
create database hudi_gy_dwd;
use hudi_gy_dwd;
-- Table used to store the cleansed data
CREATE TABLE `hudi_cow_pt_tbl`
(
    change_id INT,
    change_machine_id INT,
    ChangeEndTime TIMESTAMP,
    etldate TIMESTAMP,
    dwd_insert_user STRING,
    dwd_insert_time TIMESTAMP,
    dwd_modify_user STRING,
    dwd_modify_time TIMESTAMP,
    PRIMARY KEY (change_id, change_machine_id) NOT ENFORCED
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION '/user/hive/warehouse/hudi_gy_dwd.db/hudi_cow_pt_tbl';
```
#### Insert the data
```scala
package run

import org.apache.spark.sql.functions.to_timestamp
import org.apache.spark.sql.{SaveMode, SparkSession}

import java.sql.Timestamp

object MyClass {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local").appName("insertDataToHudi")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    // Read back the data written to the ods layer in the previous step
    val dataFrame1 = spark.read
      .format("org.apache.hudi")
      .load("/user/hive/warehouse/hudi_gy_ods.db/changerecord")
    // String conversion UDF: turn yyyyMMdd into yyyy-MM-dd 00:00:00,
    // while yyyy-MM-dd HH:mm:ss values pass through unchanged
    val stringDateFormat = spark.udf.register(
      "stringDateFormat",
      (s: String) => {
        Timestamp.valueOf(if (s.contains('-')) s else s.substring(0, 4) + '-' + s.substring(4, 6) + '-' + s.substring(6, 8) + " 00:00:00")
      }
    )
    // Normalize the time columns (note: the pattern must be yyyy-MM-dd, not yyyy-MM-DD)
    val dataFrame2 = dataFrame1.select(
      dataFrame1("ChangeID") as "change_id",
      dataFrame1("ChangeMachineID") as "change_machine_id",
      to_timestamp(stringDateFormat(dataFrame1("ChangeEndTime")), "yyyy-MM-dd HH:mm:ss") as "ChangeEndTime",
      to_timestamp(stringDateFormat(dataFrame1("etldate")), "yyyy-MM-dd HH:mm:ss") as "etldate"
    )
    dataFrame2.createTempView("tb2")
    dataFrame2.show()
    // Append the audit columns
    val dataFrame = spark.sql(
      """
        |select *,
        |'user1' as dwd_insert_user,
        |now() as dwd_insert_time,
        |'user1' as dwd_modify_user,
        |now() as dwd_modify_time
        |from tb2
        |""".stripMargin
    )
    dataFrame.show()
    // Write the result out to the dwd layer
    dataFrame.write
      .format("org.apache.hudi")
      .mode(SaveMode.Overwrite)
      .option("hoodie.table.name", "hudi_gy_dwd.hudi_cow_pt_tbl")
      // Composite key and precombine field per the task requirements
      .option("hoodie.datasource.write.recordkey.field", "change_id,change_machine_id")
      .option("hoodie.datasource.write.precombine.field", "dwd_modify_time")
      .save("hdfs://192.168.0.141/user/hive/warehouse/hudi_gy_dwd.db/hudi_cow_pt_tbl")
  }
}
```
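The conversion rule inside the stringDateFormat UDF can be checked in isolation. A standalone sketch (the name `normalize` is ours, introduced only for illustration) that mirrors its logic:

```scala
import java.sql.Timestamp

// Mirrors the stringDateFormat UDF: a bare yyyyMMdd date gains a 00:00:00 time-of-day,
// while a value already containing '-' is parsed as yyyy-MM-dd HH:mm:ss unchanged
def normalize(s: String): Timestamp = {
  if (s.contains('-')) Timestamp.valueOf(s)
  else Timestamp.valueOf(s.substring(0, 4) + '-' + s.substring(4, 6) + '-' + s.substring(6, 8) + " 00:00:00")
}

println(normalize("20231125"))            // 2023-11-25 00:00:00.0
println(normalize("2023-11-26 19:36:55")) // 2023-11-26 19:36:55.0
```

Both branches satisfy the task's rule that date-only values are padded with 00:00:00 before being treated as yyyy-MM-dd HH:mm:ss timestamps.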
#### Query the result
```scala
package run

import org.apache.spark.sql.SparkSession

object MyClass {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local").appName("insertDataToHudi")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()
    spark.read
      .format("org.apache.hudi")
      .load("/user/hive/warehouse/hudi_gy_dwd.db/hudi_cow_pt_tbl")
      .createTempView("tb")
    // Both sort keys must be descending per the task
    spark.sql("select * from tb order by change_machine_id desc, change_id desc limit 1").show()
  }
}
```
The query output:
```
+-------------------+--------------------+--------------------+----------------------+--------------------+---------+-----------------+-------------------+-------------------+---------------+--------------------+---------------+--------------------+
|_hoodie_commit_time|_hoodie_commit_seqno| _hoodie_record_key|_hoodie_partition_path| _hoodie_file_name|change_id|change_machine_id| ChangeEndTime| etldate|dwd_insert_user| dwd_insert_time|dwd_modify_user| dwd_modify_time|
+-------------------+--------------------+--------------------+----------------------+--------------------+---------+-----------------+-------------------+-------------------+---------------+--------------------+---------------+--------------------+
| 20231126210357474|20231126210357474...|20231126210357474...| |cd506109-ee64-460...| 1| 1|2023-11-26 19:36:55|2023-11-25 00:00:00| user1|2023-11-26 21:04:...| user1|2023-11-26 21:04:...|
+-------------------+--------------------+--------------------+----------------------+--------------------+---------+-----------------+-------------------+-------------------+---------------+--------------------+---------------+--------------------+
```
### III. Metric Computation
#### Preliminaries
##### Deploy the ClickHouse database
The tasks in this section store their results in ClickHouse, so we first need to set ClickHouse up. Below is a test log captured once ClickHouse was ready.
Reference article: http://www.lingyuzhao.top/?/linkController=/articleController&link=98235902
```
liming-virtual-machine :) show databases;
SHOW DATABASES
Query id: 7d5b89a9-e75c-4d28-89be-2bc2a7fb37c4
┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ default │
│ information_schema │
│ system │
└────────────────────┘
4 rows in set. Elapsed: 0.003 sec.
liming-virtual-machine :)
```
##### Development environment: Maven
Since we connect to ClickHouse over JDBC, the following dependency must be added to Maven.
```xml
<dependencies>
    <dependency>
        <groupId>ru.yandex.clickhouse</groupId>
        <artifactId>clickhouse-jdbc</artifactId>
        <version>0.3.1</version>
    </dependency>
</dependencies>
```
##### Writing to ClickHouse from Spark
Here we test writing data to ClickHouse from Spark, to confirm that the ClickHouse JDBC connection works.
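Before wiring this into Spark, the connection can be probed directly over JDBC. A minimal sketch, assuming ClickHouse's HTTP port 8123 on the same 192.168.0.141 host used elsewhere in this document and a passwordless default user (adjust both to your cluster; the clickhouse-jdbc driver from the dependency above must be on the classpath):

```scala
import java.sql.DriverManager

// Assumed connection details: host, port, database and credentials
val url = "jdbc:clickhouse://192.168.0.141:8123/shtd_industry"

// Open a JDBC connection and run a trivial probe query
def probeClickHouse(): Unit = {
  val conn = DriverManager.getConnection(url, "default", "")
  try {
    val rs = conn.createStatement().executeQuery("select 1")
    while (rs.next()) println(rs.getInt(1))
  } finally conn.close()
}
```

If the probe prints 1, the driver and server are reachable, and Spark's JDBC writer can reuse the same URL.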
#### Question 1
Write Scala code that uses Spark to compute, from the fact_change_record table in the hudi_gy_dwd layer, the total duration of each state for each device in each month (by the month of change_start_time); if a state has not yet ended (i.e. change_end_time is null), it does not take part in the computation. Store the result in the machine_state_time table of the shtd_industry database in ClickHouse (structure below), then in the Linux ClickHouse CLI sort by device id and state duration, both descending, query the first 10 rows, and paste the SQL statement and its output.
The required output format is as follows:
| Field | Type | Description | Notes |
| :------------: | :------------: | :------------: | :------------: |
| machine_id | int | device id | |
| change_record_state | varchar | state | |
| duration_time | varchar | duration (seconds) | sum of the state's duration in that month |
| year | int | year | year the state occurred |
| month | int | month | month the state occurred |
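Per row, the aggregation reduces to three grouping keys (machine, plus the year and month of change_start_time) and a duration in seconds, with null change_end_time rows filtered out beforehand. A plain-Scala sketch of the per-interval computation (the helper name `stateDuration` is ours, introduced only for illustration):

```scala
import java.time.LocalDateTime
import java.time.temporal.ChronoUnit

// For one state interval, derive the grouping keys (year and month of the start time)
// and the interval's duration in whole seconds
def stateDuration(start: LocalDateTime, end: LocalDateTime): (Int, Int, Long) =
  (start.getYear, start.getMonthValue, ChronoUnit.SECONDS.between(start, end))

// A 36-minute-55-second interval in November 2023
println(stateDuration(
  LocalDateTime.of(2023, 11, 26, 19, 0, 0),
  LocalDateTime.of(2023, 11, 26, 19, 36, 55)
)) // (2023,11,2215)
```

Summing these per-interval seconds grouped by machine_id, change_record_state, year, and month yields exactly the duration_time column of the target table.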
*To be continued*
------
***Operation Record***
Author: [root](http://www.lingyuzhao.top//index.html?search=1 "root")
Time: 2023-12-24 13:08:17, Sunday
Event: save/publish