AutoTimes: Study the Source Code

AutoTimes Overview

We continue exploring the fusion of LLMs and time series. This more recent work reaches a new state of the art in time series forecasting compared to the previous two (TimeLLM and TEST). The repository can be found here. The main difference from previous approaches is that AutoTimes leverages the autoregressive nature of LLMs, so it can generate forecasts of arbitrary length. It uses a similar patch-based encoding for the time series, but it also treats the timestamp text as position embeddings....
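The two ideas in that excerpt, patch tokens plus timestamp-text position encodings fed through an autoregressive rollout, can be sketched roughly as follows. This is a minimal numpy sketch, not the repository's code: the patch length, model width, random linear layers, and the `timestamp_embedding` stand-in (a deterministic substitute for the frozen LLM's text embedding) are all illustrative assumptions.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
PATCH_LEN, D_MODEL = 24, 16  # hypothetical sizes, not the paper's defaults

# Hypothetical random linear maps standing in for the model's trained
# patch-embedding and prediction-head projections.
W_embed = rng.normal(size=(PATCH_LEN, D_MODEL)) / np.sqrt(PATCH_LEN)
W_head = rng.normal(size=(D_MODEL, PATCH_LEN)) / np.sqrt(D_MODEL)

def embed_patches(series):
    """Split a 1-D series into non-overlapping patches, one token per patch."""
    n = len(series) // PATCH_LEN
    patches = series[: n * PATCH_LEN].reshape(n, PATCH_LEN)
    return patches @ W_embed                      # (n_patches, d_model)

def timestamp_embedding(ts):
    """Deterministic stand-in for the frozen LLM's text embedding of a
    timestamp string; these are added to the patch tokens as positions."""
    seed = zlib.crc32(ts.encode())
    return np.random.default_rng(seed).normal(size=D_MODEL)

def forecast(series, timestamps, steps):
    """Autoregressive rollout: predict the next patch from the last token,
    re-embed it, append, repeat -- hence forecasts of arbitrary length.
    (Timestamp encodings for the generated patches are omitted here.)"""
    tokens = embed_patches(series) + np.stack(
        [timestamp_embedding(t) for t in timestamps]
    )
    preds = []
    for _ in range(steps):
        next_patch = tokens[-1] @ W_head          # (PATCH_LEN,) raw values
        preds.append(next_patch)
        tokens = np.vstack([tokens, next_patch @ W_embed])
    return np.concatenate(preds)

history = rng.normal(size=PATCH_LEN * 4)          # 4 context patches
stamps = [f"2024-12-0{i + 1}" for i in range(4)]  # one timestamp per patch
out = forecast(history, stamps, steps=3)          # 3 patches of future values
```

The rollout loop is where the "arbitrary length" claim comes from: since each predicted patch is re-embedded and appended as the next input token, the horizon is a runtime argument rather than a fixed output dimension.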

07 Dec 2024 · Fred Xu

Text Prototype Aligned Embedding for Time Series: Study the Source Code

Similar to the previous post, I will analyze the source code of Text Prototype Aligned Embedding for Time Series (TEST). The repository can be found here. The main idea of this paper is very similar to TimeLLM's: integrate temporal information into the LLM training process. The difference lies in how the alignment between text and time series is established. As in the previous post on TimeLLM, I will first analyze the input data and the overall forward-pass architecture....
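One common way such text/time-series alignment is set up is to express each time-series patch embedding as a similarity-weighted mixture of a small set of "text prototype" vectors drawn from the LLM's embedding table. The numpy sketch below illustrates that general mechanism only; the sizes, the random prototypes, and the function names are hypothetical, not taken from the TEST repository.

```python
import numpy as np

rng = np.random.default_rng(1)
D_MODEL, N_PROTO = 16, 8  # hypothetical sizes

# Hypothetical "text prototypes": a small set of frozen LLM word embeddings
# chosen as anchors for the time-series representation space.
prototypes = rng.normal(size=(N_PROTO, D_MODEL))

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def align_to_prototypes(patch_embeddings):
    """Map patch embeddings into the LLM's text space by rewriting each one
    as a similarity-weighted convex combination of the text prototypes."""
    sims = patch_embeddings @ prototypes.T        # (n_patches, n_proto)
    weights = softmax(sims)                       # soft assignment per patch
    return weights @ prototypes                   # (n_patches, d_model)

patch_emb = rng.normal(size=(5, D_MODEL))         # 5 encoded patches
aligned = align_to_prototypes(patch_emb)
```

Because the output is a convex combination of frozen text embeddings, every aligned patch lies inside the span the LLM already understands, which is the point of prototype-based alignment.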

07 Dec 2024 · Fred Xu

TimeLLM: Study the Source Code

Time series forecasting has been a hot topic in deep learning, and many interesting models have been proposed in the last few years. TimeLLM is a novel approach that integrates temporal information into the LLM training process. In this post, I will connect the paper's content with the code and discuss the implementation details. This is helpful for me, since I plan to use LLMs for time series forecasting in the future....

06 Dec 2024 · Fred Xu