Kylin Page

A fool who dreams.

Intro To SSD and Evaluation

SSD-related research and evaluation methods

Prolonging 3D NAND SSD Lifetime via Read Latency Relaxation, Extended Abstract. Architecture overview: https://www.cnblogs.com/whl320124/articles/10063608.html. LBA (Logical Block Addressing) is how reads and writes are addressed at the operating-system level...
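The excerpt cuts off right after introducing LBA, so here is a minimal, hedged sketch of the idea using made-up geometry constants (not taken from the post): it shows how a flat logical block address can be turned into NAND block/page coordinates.

```python
# Minimal illustration of mapping a logical block address (LBA) to NAND
# coordinates. The geometry constant is an assumption for this example only;
# a real FTL keeps a dynamic logical-to-physical remapping table instead.
PAGES_PER_BLOCK = 256   # pages per erase block (assumption)

def lba_to_nand(lba: int):
    block = lba // PAGES_PER_BLOCK   # which erase block the page falls in
    page = lba % PAGES_PER_BLOCK     # page offset inside that block
    return block, page

print(lba_to_nand(1000))  # (3, 232)
```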

Quantitative Finance Interviews 50

Quant Review 50 Brain Teasers

1. Screwy pirates: Five pirates looted a chest full of 100 gold coins. Being a bunch of democratic pirates, they agree on the following method to divide the loot: The most senior pirate wil...
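The excerpt stops before the rules are fully stated; the sketch below is a hedged backward-induction calculation assuming the standard version of the puzzle (a proposal passes with at least half the votes, the proposer votes for himself, and an indifferent pirate votes against).

```python
def divide(n_pirates: int, coins: int = 100):
    """Backward induction for the classic pirate puzzle.
    Assumptions (standard version): a proposal passes with at least half of
    the votes, the proposer votes for himself, and a pirate who gains nothing
    extra votes against. Index 0 is the most junior pirate."""
    alloc = [coins]                      # a lone pirate keeps everything
    for n in range(2, n_pirates + 1):
        votes_needed = (n + 1) // 2      # the proposer's own vote counts
        bribes_left = votes_needed - 1
        new = [0] * n
        for i, prev_share in enumerate(alloc):
            # Anyone who would get 0 in the (n-1)-pirate outcome can be
            # bought for a single coin.
            if bribes_left and prev_share == 0:
                new[i] = 1
                bribes_left -= 1
        new[-1] = coins - sum(new)       # the most senior pirate keeps the rest
        alloc = new
    return alloc

print(divide(5))  # [1, 0, 1, 0, 98] under these assumptions
```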

Coding on Sliding Window

Sliding-window technique and problem list

Same-direction two pointers. Problems with a monotonic structure can be solved with same-direction two pointers. Take LC 209 for example: moving left makes the window go from satisfying the requirement to violating it, while moving right makes it go from violating the requirement to satisfying it. Template (using 209 as the example): find the length of the shortest subarray of nums whose sum is at least target. class Solution: def minSubArrayLen(self, target: int, num...
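The template in the excerpt is truncated; below is a self-contained sketch of the LC 209 solution it refers to, written against the standard `minSubArrayLen` signature (a sketch, not necessarily the exact code from the post).

```python
from typing import List

class Solution:
    def minSubArrayLen(self, target: int, nums: List[int]) -> int:
        # Same-direction two pointers: right grows the window, left shrinks it
        # while the window sum still meets the target.
        n = len(nums)
        ans = n + 1
        window_sum = left = 0
        for right, x in enumerate(nums):
            window_sum += x
            while window_sum - nums[left] >= target:
                window_sum -= nums[left]
                left += 1
            if window_sum >= target:
                ans = min(ans, right - left + 1)
        return ans if ans <= n else 0

print(Solution().minSubArrayLen(7, [2, 3, 1, 2, 4, 3]))  # 2
```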

Coding on Binary Search

Binary-search technique and problem list

Binary search. You only need to nail down two points and you are off: 1) write left and right as an open interval; 2) maintain the loop invariant. Template: left, right = -1, n; while left + 1 < right: # the open interval is non-empty; # loop invariants: # f(left) >= k, # f(right) < k; mid = (left + right) /...
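The template is cut off at mid, so here is a hedged completion of the open-interval form, specialised (as an assumption) to finding the last index i with f(i) >= k for a non-increasing f, one case where the stated invariant holds.

```python
def last_at_least(f, n, k):
    left, right = -1, n          # open interval (left, right)
    while left + 1 < right:      # the open interval is non-empty
        mid = (left + right) // 2
        # loop invariants: f(left) >= k  (left = -1 is vacuously fine)
        #                  f(right) < k  (right = n is vacuously fine)
        if f(mid) >= k:
            left = mid
        else:
            right = mid
    return left                  # -1 means no index satisfies f(i) >= k

# Usage on a non-increasing array (assumption): last index with value >= 4.
nums = [9, 7, 7, 4, 2]
print(last_at_least(lambda i: nums[i], len(nums), 4))  # 3
```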

Coding on Binary Lifting on Trees

Binary lifting on trees: technique and problem list

Binary lifting on trees. Applicable scenario: finding the k-th ancestor of a node in a tree, with an idea similar to exponentiation by squaring. Precompute, for every node, its 1st, 2nd, 4th, ..., 2^⌊log h⌋-th ancestors, so the preprocessing costs O(n log h). Each subsequent k-th-ancestor query then takes O(log k), which suits settings with many queries. Template (using LeetCode 2836 as the example): class Solution { public: long l...
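The C++ template for LeetCode 2836 is truncated; rather than guess at that problem's details, here is a generic binary-lifting sketch for k-th-ancestor queries (in Python, to match the other templates on this page); the parent-array convention is an assumption.

```python
from typing import List

class KthAncestor:
    def __init__(self, n: int, parent: List[int]):
        # parent[v] is the direct parent of v, with parent[root] = -1 (assumption).
        LOG = max(1, n.bit_length())
        # up[j][v] is the 2^j-th ancestor of v, or -1 if it does not exist.
        self.up = [parent[:]]
        for j in range(1, LOG):
            prev = self.up[j - 1]
            self.up.append([prev[prev[v]] if prev[v] != -1 else -1 for v in range(n)])

    def query(self, v: int, k: int) -> int:
        # Jump by the set bits of k, one power of two at a time.
        j = 0
        while k and v != -1:
            if j == len(self.up):   # k exceeds any possible depth
                return -1
            if k & 1:
                v = self.up[j][v]
            k >>= 1
            j += 1
        return v

anc = KthAncestor(4, [-1, 0, 1, 2])   # path 0 <- 1 <- 2 <- 3
print(anc.query(3, 2))  # 1
```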

C2 Quantitative Factor Stock Selection Strategy

Quantitative factor stock-selection strategy

Outline. Different schools of factor mining: ML people build black boxes, while the finance school pursues interpretability. Quant firms care about why your alpha works and what logic sits behind it. What is alpha? Moutai and Nvidia have alpha: excess returns whether the broad market goes up or down. Stocks are very risky, returning around 50% per year in good times. Currencies, bonds, stocks, xx combined into one portfolio. Smart beta: when the market is good/bad ...

Coding on Grouped Loops

Grouped-loop technique and problem list

Grouped loops. Applicable scenario: per the problem statement, the array splits into several segments, and the check/processing logic is the same for every segment. p.s. This keeps the code from getting too ugly. Template: i, n = 0, len(nums); while i < n: start = i; while i < n and ...: i += 1; # the segment spans start to i-1 ...
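The condition in the template is elided; as one concrete (assumed) instance, the sketch below uses the grouped-loop skeleton to find the longest strictly increasing run.

```python
def longest_increasing_run(nums: list) -> int:
    # Grouped loops: the outer loop starts a new segment, the inner loop
    # extends it while the segment-specific condition holds.
    ans, i, n = 0, 0, len(nums)
    while i < n:
        start = i                  # the segment begins at start
        i += 1
        while i < n and nums[i] > nums[i - 1]:
            i += 1                 # extend while the run stays increasing
        ans = max(ans, i - start)  # the segment spans start .. i-1
    return ans

print(longest_increasing_run([1, 3, 5, 4, 7]))  # 3
```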

Cheatsheet for Quant Basics

A quick-reference handbook of quant concepts

Indicators. OEY: Operating earnings yield is the ratio of a company's operating earnings to its market value. Operating Earnings Yield = Operating Earnings / Market Capitalization. Operating Earnings is the company's operating profit, sometimes called EBIT (earnings before interest and taxes) or operating income. ...
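A tiny worked example of the OEY formula above, with made-up figures (not from the post):

```python
# Toy numbers (assumptions) illustrating OEY = Operating Earnings / Market Cap.
operating_earnings = 5_000_000_000        # EBIT, e.g. 5B (made-up)
market_capitalization = 80_000_000_000    # market cap, e.g. 80B (made-up)

oey = operating_earnings / market_capitalization
print(f"OEY = {oey:.2%}")  # OEY = 6.25%
```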

CheatSheet for Alpha

A list of effective alphas

1/close: use the inverse of the daily close price as stock weights, allocating more capital to the stocks with lower daily close prices. volume/adv20: use the daily volume relative to the average vo...
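A brief numerical sketch of the two expressions above, on made-up prices and volumes (not from the post):

```python
import numpy as np

close = np.array([12.0, 48.0, 96.0])   # daily close prices (made-up)
volume = np.array([2e6, 5e5, 1e6])     # today's volumes (made-up)
adv20 = np.array([1e6, 1e6, 1e6])      # 20-day average volumes (made-up)

# alpha = 1/close: larger weight on cheaper stocks; normalize to sum to 1.
w = 1.0 / close
w /= w.sum()

# alpha = volume/adv20: today's volume relative to its 20-day average.
rel_vol = volume / adv20

print(w.round(3))   # [0.727 0.182 0.091]
print(rel_vol)      # [2.  0.5 1. ]
```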

PETALS Collaborative Inference and FT of LLMs

Collaborative Inference and Fine-tuning of Large Models

ACL 23. PETALS: a system for inference and fine-tuning of large models collaboratively by joining the resources of multiple parties.