added whole content

This commit is contained in:
Fedor Katurov 2022-11-03 10:38:11 +06:00
parent 1b5df685cb
commit 8b25e0631a
70 changed files with 5962 additions and 19 deletions


@ -0,0 +1,192 @@
- Simple #dapp example for tests: [https://metamask.github.io/test-dapp/](https://metamask.github.io/test-dapp/)
- Interaction with smart contracts is described in [Smart contracts](Smart%20contracts.md)
## Connecting to a node
If the #Metamask extension is installed, `Web3.givenProvider` is available on the global `window`. You can use [Infura](https://infura.io) or your own node instead:
```typescript
import Web3 from 'web3';
// URL of your node
const PROVIDER_URL = 'https://...';
export const web3 = new Web3(Web3.givenProvider || PROVIDER_URL);
```
## Getting wallet balance
```typescript
const getBalance = async (address: string) => {
return await web3.eth.getBalance(address);
}
```
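The balance comes back as a wei string, so convert it for display. A quick usage sketch (the address below is a placeholder):
```typescript
const printBalance = async () => {
  const balanceWei = await getBalance('0x0000000000000000000000000000000000000000');
  console.log(`${web3.utils.fromWei(balanceWei, 'ether')} ETH`);
};
```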
## Getting wallet address
```typescript
// first we need to authorize
const authorize = async () => {
await web3.currentProvider.request({ method: 'eth_requestAccounts' });
}
// then we can get wallet address
const getCurrentAddressUser = () => {
return web3.currentProvider.selectedAddress;
}
```
## Sending a transaction
Sending `value` tokens with `memo` attached as the transaction data:
```typescript
// signing helpers (this snippet assumes the ethereumjs packages)
import * as EthUtil from 'ethereumjs-util';
import { Transaction } from 'ethereumjs-tx';
import { web3 } from '.';

const transfer = async ({
  from,
  to,
  value,
  memo,
  privateKey,
  gasLimit = 44000
}) => {
  const nonce = await web3.eth.getTransactionCount(from);
  const gasPrice = await web3.eth.getGasPrice();
  const rawTx = {
    from,
    to,
    value: web3.utils.toHex(web3.utils.toWei(value, 'ether')),
    gasLimit: web3.utils.toHex(gasLimit),
    gasPrice: web3.utils.toHex(gasPrice),
    nonce: web3.utils.toHex(nonce),
    data: memo,
  };
  const privateKeyBuffer = EthUtil.toBuffer(privateKey);
  const tx = new Transaction(rawTx);
  tx.sign(privateKeyBuffer);
  const serializedTx = tx.serialize();
  return web3.eth.sendSignedTransaction(
    `0x${serializedTx.toString('hex')}`
  );
};
```
## Estimating the transaction fee
Useful when you need to receive a fixed amount of tokens from the user with a pre-estimated fee.
```typescript
import { web3 } from '.';

const estimateFee = async ({
  from,
  to,
  value,
  memo,
}) => {
  const gasPrice = await web3.eth.getGasPrice();
  // estimateGas already returns a promise with the gas amount
  const gasLimit = await web3.eth.estimateGas({
    from,
    to,
    value: web3.utils.toHex(web3.utils.toWei(value, 'ether')),
    data: web3.utils.asciiToHex(memo),
  });
  return web3.utils.fromWei(
    (BigInt(gasPrice.toString()) * BigInt(gasLimit.toString())).toString()
  );
};
```
## Subscribing to wallet address change
```typescript
import { web3 } from '.';
web3.currentProvider.on('accountsChanged', callback);
```
## Watching network change
```typescript
window.ethereum.on('chainChanged', (chainId: string) => console.log(chainId));
```
## Adding custom token to wallet
```typescript
window.ethereum
.request({
method: 'wallet_watchAsset',
params: {
type: 'ERC20',
options: {
address: '0xb60e8dd61c5d32be8058bb8eb970870f07233155',
symbol: 'FOO',
decimals: 18,
image: 'https://foo.io/token-image.svg',
},
},
})
.then((success) => {
if (success) {
console.log('FOO successfully added to wallet!')
} else {
throw new Error('Something went wrong.')
}
})
.catch(console.error)
```
## Changing the network to a custom one
Checking current chainId:
```typescript
const getChainID = async () => {
return ethereum.request({ method: 'eth_chainId' })
}
```
Asking the wallet to switch the current network:
```typescript
try {
await window.ethereum.request({
method: 'wallet_switchEthereumChain',
params: [{ chainId: '0x03' }], // ropsten chainID (3) in hex
});
} catch (switchError) {
// This error code indicates that the chain has not been added to MetaMask.
if (switchError.code === 4902) {
try {
await window.ethereum.request({
method: 'wallet_addEthereumChain',
params: [{
chainId: '0x03', // ropsten chainID (3) in hex
chainName: 'Ropsten Test Network',
nativeCurrency: {
name: 'ETH',
symbol: 'ETH',
decimals: 18
},
rpcUrls: ['https://ropsten.infura.io/v3/9aa3d95b3bc440fa88ea12eaa4456161'],
blockExplorerUrls: ['https://ropsten.etherscan.io']
}] ,
});
} catch (addError) {
// handle "add" error
}
}
// handle other "switch" errors
}
```


@ -0,0 +1,248 @@
For common functions see [Common typescript examples](Common%20typescript%20examples.md).
## Getting smart contract instance
Useful for calling smart contract methods:
```typescript
import { Contract } from 'web3-eth-contract';
import { web3 } from '.';
const getContract = (abi: object, address?: string): Contract => {
const abiFromJson = JSON.parse(JSON.stringify(abi));
return new web3.eth.Contract(abiFromJson, address);
};
export default getContract;
```
## Executing contract method
Contracts have **read** and **write** methods. To get a list of methods, you can paste the contract address into [Etherscan](https://etherscan.io/token/0xdac17f958d2ee523a2206206994597c13d831ec7#readContract) or any other explorer.
**Read** methods don't require spending **gas**. **Write** methods cost some amount of **gas**, so they are executed only after confirmation from the user.
### Example for #Metamask without private key
```typescript
// see the getContract example above
import getContract from '.';
import { web3 } from '.';
// ABI of the contract
const CONTRACT_ABI = { /* ... */ };
// contract address
const CONTRACT_ADDRESS = '0xdea164f67df4dbfe675d5271c9d404e0260f33bb';

export const executeContractMethod = async () => {
  // getting contract
  const contract = getContract(CONTRACT_ABI, CONTRACT_ADDRESS);

  // calling a write method
  try {
    // authorizing with Metamask
    await web3.currentProvider.request({ method: 'eth_requestAccounts' });
    // getting wallet address
    const addressUser = web3.currentProvider.selectedAddress;
    // calling the contract's "store" method.
    // The payload should include a `from` address that matches
    // the current user's wallet
    await contract.methods.store(0, 'Parameter').send({
      from: addressUser,
    });
  } catch (e) {
    throw new Error(e);
  }

  // calling a read method
  try {
    // this method can return data
    const result = await contract.methods.retrieve().call();
  } catch (e) {
    throw new Error(e);
  }
};
```
### Node.js and React Native example
```typescript
// see the getContract example above
import getContract from '.';
import { web3 } from '.';
// contract ABI
const CONTRACT_ABI = { /* ... */ };
// contract address
const CONTRACT_ADDRESS = '0xdea164f67df4dbfe675d5271c9d404e0260f33bb';
// getting contract
const contract = getContract(CONTRACT_ABI, CONTRACT_ADDRESS);
// account's private key
const privateKey = '...';

// write methods require a private key
const executeContractMethod = async (val: number) => {
  const transaction = contract.methods.store(val);
  const account = web3.eth.accounts.privateKeyToAccount(privateKey);
  const options = {
    to: CONTRACT_ADDRESS,
    data: transaction.encodeABI(),
    gas: await transaction.estimateGas({ from: account.address }),
    gasPrice: await web3.eth.getGasPrice(),
  };
  const signed = await web3.eth.accounts.signTransaction(
    options,
    privateKey,
  );
  await web3.eth.sendSignedTransaction(signed.rawTransaction!);
};
```
### Calling a batch of contract methods
The helper below executes a batch of requests and returns an array of results. For example:
```typescript
const requests = [
  contract.methods.balanceOf().call,
  contract.methods.getStaked().call,
];
const result = await makeBatchRequest(requests);
```
```typescript
import Web3 from 'web3';

const web3 = new Web3(Web3.givenProvider || PROVIDER_URL);

const makeBatchRequest = (calls: any[]) => {
  try {
    const batch = new web3.BatchRequest();
    const promises = calls.map((call) => {
      return new Promise((resolve, reject) => {
        batch.add(
          call.request({}, (err, result) => {
            if (err) {
              reject(err);
            } else {
              resolve(result);
            }
          })
        );
      });
    });
    batch.execute();
    return Promise.all(promises);
  } catch {
    return null;
  }
};

export default makeBatchRequest;
```
## Subscribing to smart contract events
There are different ways to subscribe to contract events. For all of them you will need the following variables:
```typescript
import Web3 from 'web3';
const web3 = new Web3('YOUR_RPC_ENDPOINT_HERE');
const ABI = 'YOUR ABI HERE';
const CONTRACT_ADDRESS = 'YOUR CONTRACT ADDRESS HERE';
const myContract = new web3.eth.Contract(ABI, CONTRACT_ADDRESS);
```
### By accessing contract.events
```typescript
myContract.events
  .RegisterUser()
  .on('connected', (subscriptionId: string) => {
    console.log(`| RegisterUser | events | ${subscriptionId}`);
  })
  .on(
    'data',
    async (event: {
      removed: boolean;
      returnValues: { user: string; referrer: string };
    }) => {
      try {
        if (event.removed) {
          return;
        }
        const { user, referrer } = event.returnValues;
        console.log(user, referrer);
      } catch (e) {
        console.log(`| ONCE | ${e}`);
      }
    },
  )
  .on('error', (error: Error) => {
    console.log(error);
  });
```
### With filtering
We're listening to the `Transfer` event here:
```typescript
let options = {
filter: {
value: [],
},
fromBlock: 0
};
myContract.events.Transfer(options)
.on('data', event => console.log(event))
.on('changed', changed => console.log(changed))
.on('error', err => { throw err; })
.on('connected', str => console.log(str))
```
### Common Subscribe method
Filtering options can also be specified:
```typescript
let options = {
fromBlock: 0,
address: ['address-1', 'address-2'], //Only get events from specific addresses
topics: [] //What topics to subscribe to
};
let subscription = web3.eth.subscribe('logs', options, (err, event) => {
if (!err) console.log(event);
});
subscription.on('data', event => console.log(event))
subscription.on('changed', changed => console.log(changed))
subscription.on('error', err => { throw err })
subscription.on('connected', nr => console.log(nr))
```
### Getting event history
Getting the history of `Transfer` events for specific values. More info can be found [here](https://web3js.readthedocs.io/en/v1.2.11/web3-eth-subscribe.html#).
```typescript
//example options(optional)
let options = {
filter: {
// only get events where transfer value was 1000 or 1337
value: ['1000', '1337']
},
// number | "earliest" | "pending" | "latest"
fromBlock: 0,
toBlock: 'latest'
};
myContract.getPastEvents('Transfer', options)
.then(results => console.log(results))
.catch(err => { throw err; });
```


@ -0,0 +1,45 @@
A grid that places items by density. A pure #css solution. Can be used with items that take a different number of rows/columns.
```scss
$cell: 250px;
$gap: 20px;
.grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax($cell, 1fr));
grid-auto-rows: 256px;
grid-auto-flow: row dense;
grid-column-gap: $gap;
grid-row-gap: $gap;
}
```
### Basic elements with double height or width
```scss
.h-2 { // takes 2 columns
grid-column-end: span 2;
}
.v-2 { // takes 2 rows
grid-row-end: span 2;
}
```
### Header that fills all columns
```scss
.full-width {
grid-row: 1 / 2; // height: 1 row
grid-column: 1 / -1;
}
```
### Stamp element that takes 2 rows in the top right corner
```scss
.top-right {
grid-row: 1 / 3; // height here
grid-column: -2 / -1; // width here
}
```


@ -0,0 +1,19 @@
Say we need to color `n` items with specific colors that depend on their position. #SCSS supports [iteration over lists](https://sass-lang.com/documentation/at-rules/control/each) for that purpose:
```scss
@mixin color-per-child($colors) {
@each $color in $colors {
&:nth-child(#{index(($colors), ($color))}) {
color: $color;
}
}
}
```
Usage is simple:
```scss
.item {
@include color-per-child((#ded187, #dbde87, #bade87, #9cde87, #87deaa));
}
```


@ -0,0 +1,20 @@
To test whether the browser supports some #CSS rules, do the following:
```css
@supports (backdrop-filter: blur(5px)) {
  /* declarations must live inside a selector */
  .blurred {
    backdrop-filter: blur(5px);
  }
}
```
This `@mixin` will only apply rules if the browser supports backdrop filtering:
```scss
@mixin can_backdrop {
@supports (
(-webkit-backdrop-filter: blur(5px)) or
(backdrop-filter: blur(5px))
) {
@content;
}
}
```
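A usage sketch (the `.header` selector and the blur value are just placeholders):
```scss
.header {
  @include can_backdrop {
    backdrop-filter: blur(5px);
    background: rgba(0, 0, 0, 0.5);
  }
}
```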


@ -0,0 +1,15 @@
Sample #Dockerfile for static Typescript builds such as #nextjs, #gatsby or #nuxt:
```Dockerfile
FROM node:16-alpine as builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn
COPY . .
# your generate command here
RUN yarn generate
FROM nginx
COPY --from=builder /app/dist /usr/share/nginx/html
```
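A usage sketch, assuming you tag the image yourself (image name is a placeholder):
```shell
docker build -t my-static-site .
docker run --rm -p 8080:80 my-static-site
```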


@ -0,0 +1,98 @@
Can be used with [Private docker registry](Private%20docker%20registry.md) to deploy things using #docker.
## Pushing to a private docker registry
You should specify the `global_docker_login`, `global_docker_password` and `global_docker_registry` organization secrets in your **drone** instance, and a `docker_repo` secret for your repo, e.g. `docker.yourdomain.com/your-image`.
This is an example of a `.drone.yml` for a [private docker registry](Private%20docker%20registry.md):
```yaml
kind: pipeline
name: build
type: docker
platform:
  os: linux
  arch: amd64
steps:
  - name: build-master
    image: plugins/docker
    when:
      branch:
        - master
    settings:
      dockerfile: Dockerfile
      tag:
        - ${DRONE_BRANCH}
      username:
        from_secret: global_docker_login
      password:
        from_secret: global_docker_password
      registry:
        from_secret: global_docker_registry
      repo:
        from_secret: docker_repo
```
## Docker-compose file for drone-ci
The `drone` service is the UI itself and `drone-agent` is the runner for builds, which can be started on a different machine (or machines).
Change `secret_id`, `client_secret`, `rpc_secret` and `drone.url` to your own values.
```yaml
version: "3"
services:
drone:
container_name: drone
image: drone/drone:latest
environment:
- DRONE_GITHUB_CLIENT_ID=secret_id
- DRONE_GITHUB_CLIENT_SECRET=client_secret
- DRONE_RPC_SECRET=rpc_secret
- DRONE_SERVER_HOST=drone.url
- DRONE_USER_CREATE="username:user,admin:true"
- DRONE_SERVER_PROTO=https
- DRONE_TLS_AUTOCERT=false
- DRONE_GIT_ALWAYS_AUTH=false
- DRONE_LOGS_DEBUG=true
- DRONE_LOGS_TRACE=true
restart: always
volumes:
- ./data:/data
ports:
- 8090:80
drone-agent:
container_name: drone__agent
image: drone/agent:latest
command: agent
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- DRONE_RPC_SERVER=https://drone.url
- DRONE_RPC_SECRET=rpc_secret
```
## Caching builds
Haven't checked that yet, but there's a [manual](https://laszlo.cloud/the-ultimate-droneci-caching-guide) from [Laszlo Fogas](https://laszlo.cloud/) about that.
## Get user info
```shell
export DRONE_SERVER=https://drone.url
export DRONE_TOKEN=password
drone info
```
## Mark user as trusted
Usually the command below is enough. Sometimes it won't help; then connect to the drone database with sqlite and set the trusted flag to `1` manually (see the sketch after the command).
```shell
drone repo update $1 --trusted=true && drone repo info $1
```
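A hedged sketch of the sqlite fallback — the database path and the `repos` table/column names may differ between Drone versions, so check your schema first:
```shell
# run on the host where the drone data volume lives
sqlite3 ./data/database.sqlite \
  "UPDATE repos SET repo_trusted = 1 WHERE repo_slug = 'owner/repo-name';"
```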


@ -0,0 +1,44 @@
To deploy GitHub Pages with [Drone-ci](Drone-ci.md) you will need a `.drone.yml` as specified below. You should also define the secrets `github_username` and `github_token` (get it [here](https://github.com/settings/tokens)) in your drone repository settings.
The GitHub repository should be named `yourname.github.io` so it can be accessed at https://yourname.github.io/. Otherwise it will be available at https://yourname.github.io/repo-name/, which you might not like.
You should create a branch named `gh-pages` in that repo and set up GH Pages at `https://github.com/<yourusername>/<yourusername>.github.io/settings/pages`.
This config will update the `gh-pages` branch in your project, which will contain only generated content. I know that's bad, but there's no better way to do it with generic drone plugins.
```yaml
kind: pipeline
name: build
type: docker
platform:
  os: linux
  arch: amd64
steps:
  - name: build
    image: node:16
    commands:
      - yarn
      - yarn generate
      - rm -rf ./docs
      - mv ./.output/public ./docs
      - touch ./docs/.nojekyll
  - name: publish
    image: plugins/gh-pages
    settings:
      target_branch: gh-pages
      username:
        from_secret: github_username
      password:
        from_secret: github_token
```
Here we're moving `./.output/public` to `./docs`, because #nuxt creates a symlink for `docs` and git can't work with that.
We also create `.nojekyll` at the root of the repo, so github's internal engine won't [ignore files that start with underscore](https://github.blog/2009-12-29-bypassing-jekyll-on-github-pages/).
## Additional reading
- [Drone Github Pages Documentation](https://plugins.drone.io/plugins/gh-pages)
- [Bypassing Jekyll on GitHub Pages](https://github.blog/2009-12-29-bypassing-jekyll-on-github-pages/)


@ -0,0 +1,71 @@
Suitable to work with [Drone-ci](Drone-ci.md) for hosting private #docker images.
## Sample docker-compose for a custom docker registry
This brings up a private docker registry with a UI. First you'll need to generate a password for it:
```shell
docker run \
--entrypoint htpasswd registry:2 \
-Bbn user mypassword > auth/registry.password
```
```yaml
version: "3"
services:
registry:
container_name: docker__registry
image: registry:2
ports:
- 5000:5000
restart: always
environment:
- REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/data
- REGISTRY_AUTH=htpasswd
- REGISTRY_AUTH_HTPASSWD_REALM=Registry
- REGISTRY_AUTH_HTPASSWD_PATH=/auth/registry.password
- REGISTRY_HTTP_SECRET=password
- REGISTRY_STORAGE_DELETE_ENABLED=true
volumes:
- ./registry/auth:/auth
- ./registry/data:/data
ui:
container_name: docker__ui
image: parabuzzle/craneoperator:latest
ports:
- 80:80
restart: always
environment:
- REGISTRY_HOST=registry
- REGISTRY_PORT=5000
- REGISTRY_PROTOCOL=http
- ALLOW_REGISTRY_LOGIN=true
- REGISTRY_ALLOW_DELETE=true
- USERNAME=registry
- PASSWORD=password
```
## Squash layers on registry
Sometimes you need to clean up old layers in the docker registry to free up disk space.
1. Run one of these commands to delete old tags and mark their layers for removal:
```shell
# Try this first
docker run \
--rm anoxis/registry-cli \
-r https://registry.url \
-l user:password \
--delete \
--num 2
# Then this
docker run -it \
-v /path/to/registry/data:/registry \
-e REGISTRY_URL=https://registry.url \
-e DRY_RUN="false" \
-e REGISTRY_AUTH="user:password" \
mortensrasmussen/docker-registry-manifest-cleanup
```
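2. Then run the registry's built-in garbage collector to actually free the space. A hedged sketch — the container name matches the compose file above and the config path is the image default:
```shell
docker exec docker__registry \
  registry garbage-collect /etc/docker/registry/config.yml
```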


@ -0,0 +1,17 @@
## Setting up watchtower
[Watchtower](https://containrrr.dev/watchtower/) will automatically pull updated #docker images and restart the corresponding containers. Can be used with [Private docker registry](Private%20docker%20registry.md) and [Drone-ci](Drone-ci.md).
```yaml
version: "3"
services:
watchtower:
container_name: docker__watchtower
image: v2tec/watchtower
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /home/user/.docker/config.json:/config.json
command: --interval 60 image_1 image_2
```


@ -0,0 +1,15 @@
If you need to seed an `.sql` dump into a #docker container, just run this command. You can also [rsync the file over SSH](/linux/Rsync%20file%20with%20SSH) to get it from a remote host.
```shell
#!/bin/bash
#####
# usage: ./script.sh "/path/to/dump.sql"
#####
DUMP_PATH=$1
CONTAINER="db"
USER=root
PASSWORD=password
DB=database
cat "$DUMP_PATH" | docker exec -i $CONTAINER mysql -u$USER -p$PASSWORD $DB
```


@ -0,0 +1,29 @@
[wait-for-it.sh](https://github.com/vishnubob/wait-for-it) does a great job of waiting for different services to become alive, but on #MacOs #docker binds the port on container start, seconds before #mysql is ready to accept connections.
This script waits for the first successful query from the database or exits with a non-zero status after a timeout.
Don't forget to change `$query` to one that actually works against your schema.
```shell
# Waits for mysql to become actually available
wait_for_mysql() {
query="SELECT count(*) FROM users"
timeout=180 # 3 minutes limit
i=0
while ! docker exec -it "$1" mysql --user="$2" --password="$3" -e "$query" $4 >/dev/null 2>&1; do
sleep 1;
i=$(($i+1))
if [[ ${i} -ge ${timeout} ]]; then
echo "[Error] can't properly query MySQL after ${i} secs"
exit 1;
fi
done
}
# usage: wait_for_mysql miin-mysql-dev root password database
```
[Wait for redis](Wait%20for%20redis.md)


@ -0,0 +1,24 @@
[wait-for-it.sh](https://github.com/vishnubob/wait-for-it) does a great job of waiting for different services to become alive, but on #MacOs #docker binds the port on container start, seconds before redis is ready to accept connections.
This script waits for the first successful ping or exits with a non-zero status after 3 minutes.
```shell
# Waits for redis to become actually available
wait_for_redis() {
timeout=180 # 3 minutes
i=0
while ! docker exec -it "$1" redis-cli -h localhost -p 6379 -a "$2" ping | grep "PONG" >/dev/null 2>&1; do
sleep 1;
i=$(($i+1))
if [[ ${i} -ge ${timeout} ]]; then
echo "[Error] can't properly ping Redis container after ${i} secs"
exit 1;
fi
done
}
# usage: wait_for_redis miin-redis-dev password
```
[Wait for mysql](Wait%20for%20mysql.md)


@ -0,0 +1,48 @@
Use #oauth2 login with React-Native.
## Common OAuth2 providers
Can be handled by [react-native-app-auth](https://github.com/FormidableLabs/react-native-app-auth) by redirecting to a URL like `com.yourapp://oauth2provider`.
### Example for #Google
```typescript
import { authorize } from 'react-native-app-auth';
const GOOGLE_OAUTH_CLIENT = '...';
// ...
const authState = await authorize({
issuer: 'https://accounts.google.com',
clientId: `${GOOGLE_OAUTH_CLIENT}.apps.googleusercontent.com`,
redirectUrl: `com.yourapp:/oauth2redirect/google`,
scopes: ['openid', 'profile'],
dangerouslyAllowInsecureHttpRequests: true,
});
```
### Example for #Yandex
```typescript
const YANDEX_OAUTH_CLIENT = '...';
const YANDEX_OAUTH_SECRET = '...'; // better hide it somehow
const APP_ID = 'com.yourapp';
const authState = await authorize({
serviceConfiguration: {
authorizationEndpoint: `https://oauth.yandex.ru/authorize?response_type=code&client_id=${YANDEX_OAUTH_CLIENT}&redirect_uri=${APP_ID}:/oauth2redirect`,
// TODO: replace it with your own backend to secure client_secret:
tokenEndpoint: `https://oauth.yandex.ru/token?grant_type=authorization_code&client_id=${YANDEX_OAUTH_CLIENT}&client_secret=${YANDEX_OAUTH_SECRET}`,
},
clientId: YANDEX_OAUTH_CLIENT,
redirectUrl: `${APP_ID}:/oauth2redirect`,
scopes: ['login:info', 'login:avatar'],
dangerouslyAllowInsecureHttpRequests: true,
});
callback(authState.accessToken);
```
## Apple ID login
[react-native-apple-authentication](https://github.com/invertase/react-native-apple-authentication) has its own [documentation](https://github.com/invertase/react-native-apple-authentication/tree/main/docs) on setting up OAuth using Apple ID.
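A minimal sketch based on that library's README (the import path and API names are as I recall them — double-check against the docs):
```typescript
import { appleAuth } from '@invertase/react-native-apple-authentication';

const signInWithApple = async () => {
  // opens the native Apple sign-in sheet
  const response = await appleAuth.performRequest({
    requestedOperation: appleAuth.Operation.LOGIN,
    requestedScopes: [appleAuth.Scope.EMAIL, appleAuth.Scope.FULL_NAME],
  });
  // identityToken is what you send to your backend for verification
  return response.identityToken;
};
```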


@ -0,0 +1,60 @@
Sometimes you need to keep the scroll position of a `FlatList` in React Native after some user interaction.
```typescript
// interact() does some stuff that changes the FlatList scroll size
import React, { FC, useCallback, useRef } from 'react';
import { FlatList, NativeScrollEvent, NativeSyntheticEvent } from 'react-native';
type Props = { interact: () => void; }
const SomeList: FC<Props> = ({ interact }) => {
const ref = useRef<FlatList>(null);
const scrollPosition = useRef(0);
const scrollHeight = useRef(0);
// set it to `true` before interaction and back to `false` right after
const shouldKeepScrollPosition = useRef(false);
const onScroll = useCallback(
(event: NativeSyntheticEvent<NativeScrollEvent>) => {
scrollPosition.current = event.nativeEvent.contentOffset.y;
},
[],
);
const onContentSizeChange = useCallback((_: number, h: number) => {
if (!shouldKeepScrollPosition.current) {
scrollHeight.current = h;
return;
}
ref.current?.scrollToOffset({
offset: scrollPosition.current + (h - scrollHeight.current),
animated: false,
});
scrollHeight.current = h;
}, []);
// onInteraction wraps interaction to preserve scroll position
const onInteraction = useCallback(
() => {
shouldKeepScrollPosition.current = true;
setTimeout(() => {
interact();
}, 0);
setTimeout(() => {
shouldKeepScrollPosition.current = false;
}, 500);
},
[interact],
);
return (
<FlatList
// ...required FlatList options
ref={ref}
onContentSizeChange={onContentSizeChange}
onScroll={onScroll}
/>
)
};
```


@ -0,0 +1,65 @@
## Show android logcat
```shell
adb logcat com.application:I "*:S"
```
## Get .apk's SHA-256
```bash
keytool -printcert -jarfile "$1"
```
## Assemble debug release on Android
Packages a debug build with the JS bundle and resources included.
```shell
npx react-native bundle \
--platform android \
--dev false \
--entry-file index.js \
--bundle-output android/app/src/main/assets/index.android.bundle \
--assets-dest android/app/src/main/res/
cd android && ./gradlew assembleDebug
# do your stuff
./gradlew clean
```
## Send release to Android device
```shell
cd ./android \
&& ./gradlew assembleRelease \
&& adb install ./app/build/outputs/apk/release/app-release.apk
```
## Deep links
- https://zarah.dev/2022/02/08/android12-deeplinks.html
- https://developer.android.com/training/app-links/verify-site-associations#invoke-domain-verification
- https://digitalassetlinks.googleapis.com/v1/statements:list?source.web.site=https://miin.ru&relation=delegate_permission/common.handle_all_urls
### Open deep links
```shell
# ios
xcrun simctl openurl booted $1
# android
adb shell am start -W -a android.intent.action.VIEW -d $1 \
com.application
```
### Reverify links on Android
```shell
PACKAGE="com.application"
adb shell pm set-app-links --package $PACKAGE 0 all && \
adb shell pm verify-app-links --re-verify $PACKAGE
```


@ -0,0 +1,106 @@
An `<APIProvider />` component that will handle token refresh when needed. The refresh function should probably be passed in through component props.
```typescript
import axios from "axios";
import React, {
createContext,
FC,
PropsWithChildren,
useCallback,
useContext,
useEffect,
useRef,
} from "react";
interface APIProviderProps extends PropsWithChildren {
tokens: {
access: string;
refresh: string;
};
logout: () => void;
}
const APIContext = createContext({
client: axios.create({
baseURL: process.env.NEXT_PUBLIC_API_ENDPOINT,
}),
});
const APIProvider: FC<APIProviderProps> = ({
tokens,
logout,
children,
}) => {
const client = useRef(
axios.create({
baseURL: process.env.NEXT_PUBLIC_API_ENDPOINT,
})
).current;
const refreshTokens = useCallback(async (): Promise<string> => {
// TODO: implement me
throw new Error("not implemented");
}, []);
useEffect(() => {
if (!tokens.access) {
return;
}
// append `access` token to all requests
const req = client.interceptors.request.use(
async (config) => {
config.headers = {
Authorization: `Bearer ${tokens.access}`,
};
return config;
},
(error) => {
Promise.reject(error);
}
);
// refreshing interceptor
const resp = client.interceptors.response.use(
(response) => {
return response;
},
async function (error) {
const originalRequest = error.config;
if (error.response.status === 401 && !originalRequest._retry) {
originalRequest._retry = true;
const newToken = await refreshTokens();
return axios({
...originalRequest,
headers: {
...originalRequest.headers,
Authorization: "Bearer " + newToken,
},
});
}
logout();
return Promise.reject(error);
}
);
return () => {
client.interceptors.request.eject(req);
client.interceptors.response.eject(resp);
};
}, [client, tokens.access, tokens.refresh, refreshTokens, logout]);
return (
<APIContext.Provider value={{ client }}>
{children}
</APIContext.Provider>
);
};
export const useAPI = () => useContext(APIContext).client;
export { APIProvider };
```
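A usage sketch, assuming the provider wraps your app somewhere above (the component and the `/users` endpoint are made up):
```typescript
import React, { useEffect } from "react";
// useAPI comes from the provider module above
const UserList = () => {
  const client = useAPI();
  useEffect(() => {
    // requests go through the interceptors set up by APIProvider
    client.get("/users").then(({ data }) => console.log(data));
  }, [client]);
  return null;
};
```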


@ -0,0 +1,32 @@
If you need to cancel a request, use [axios with an AbortController](https://axios-http.com/docs/cancellation). Previously axios used a cancellation token, but that is now deprecated.
An `AbortController` can be used with multiple requests to cancel them all at once.
```typescript
import { useCallback, useRef } from "react";
import axios from 'axios';
const client = axios.create();
export const useGetUsers = () => {
const controller = useRef(new AbortController());
const get = useCallback(async () => {
const result = await client.get("/", {
// params and props here
signal: controller.current.signal,
});
return result.data;
}, []);
const cancel = useCallback(() => {
controller.current.abort();
// controller should be rewritten or all requests will fail
controller.current = new AbortController();
}, [controller]);
return { get, cancel };
};
```
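A usage sketch (hypothetical component) that cancels the in-flight request on unmount:
```typescript
import { useEffect } from "react";

const Users = () => {
  const { get, cancel } = useGetUsers();
  useEffect(() => {
    get().then(console.log).catch(console.warn);
    // abort the in-flight request if the component unmounts first
    return cancel;
  }, [get, cancel]);
  return null;
};
```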


@ -0,0 +1,17 @@
The topic is fully covered in the [official documentation](https://vuejs.org/guide/typescript/options-api.html#augmenting-global-properties) and in [Add global variable to window](Add%20global%20variable%20to%20window.md).
For example, say you want to add global `$http` and `$translate` services to all of the project's components:
```typescript
// ~/index.d.ts or ~/custom.d.ts
import axios from 'axios'
declare module 'vue' {
interface ComponentCustomProperties {
$http: typeof axios
$translate: (key: string) => string
}
}
```
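The declaration above only types the properties; a sketch of actually providing them (assuming a Vue 3 `main.ts`):
```typescript
// main.ts
import { createApp } from 'vue';
import axios from 'axios';
import App from './App.vue';

const app = createApp(App);
app.config.globalProperties.$http = axios;
app.config.globalProperties.$translate = (key: string) => key; // stub translator
app.mount('#app');
```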


@ -0,0 +1,16 @@
By default the [Nuxt Content Plugin](https://content.nuxtjs.org) does not handle `==highlight==` marks. To fix that we will create a `Nitro` plugin:
```typescript
// ~/server/plugins/highlight.ts
export default defineNitroPlugin((nitroApp) => {
nitroApp.hooks.hook("content:file:beforeParse", (file) => {
if (file._id.endsWith(".md")) {
file.body = file.body.replace(
/==([^=]+)==/gs,
`<span class="highlight">$1</span>`
);
}
});
});
```


@ -0,0 +1,8 @@
Forces #git to use https even if the remote url is #SSH. Useful on networks where the #ssh protocol is blocked.
Put this inside your `~/.gitconfig`:
```c
[url "https://github.com"]
insteadOf = git://github.com
```


@ -0,0 +1,30 @@
Shorthands for #git commands can be defined in `~/.gitconfig`:
```c
[alias]
flush = !git branch | grep -v master | xargs git branch -D
lol = log --oneline --graph
l = lol
c = commit -am
cv = commit --no-verify -am
p = push
pf = p --force-with-lease
ignore-now = update-index --skip-worktree
```
| **command** | **description** |
|---|---|
| `git flush` | drops all branches, except master |
| `git lol` | shows log |
|`git c` | commits with message |
| `git cv` | commits without hooks |
| `git p` | pushes |
| `git pf` | push with --force and additional check |
| `git ignore-now` | starts ignoring file from now on |


@ -0,0 +1,79 @@
Say we have a `gql` response like this and we want to have pagination with it. Let's merge it as specified in the [official documentation](https://www.apollographql.com/docs/react/caching/cache-field-behavior/#the-merge-function):
```graphql
type Query {
  listItems(
    filter: Filter
    sort: String
    limit: Int
    offset: Int
  ): ItemList!
}

input Filter {
  name: String!
  type: String!
}

type ItemList {
  items: [Item!]!
  totalCount: Int!
}
```
We will set up `ApolloClient` with `typePolicies` to merge incoming data in the cache:
```typescript
import { ApolloClient, InMemoryCache, TypePolicies } from '@apollo/client';

export const typePolicies: TypePolicies = {
  Query: {
    fields: {
      // query name
      listItems: {
        // apollo will serialize and use keyArgs as a unique
        // identifier in cache for every query.
        // Consider choosing the right fields,
        // i.e. limit and offset won't work here
        keyArgs: [
          'sort', // primitive type
          'filter', ['name', 'type'], // nested fields of `filter`
        ],
        merge: mergeItemsWithTotalCount,
      },
    },
  },
};

const client = new ApolloClient({
  // ...
  cache: new InMemoryCache({ typePolicies }),
  // ...
});
```
We also need the merge function `mergeItemsWithTotalCount`, which joins the query results with the cached data for a specific key:
```typescript
/** merges all sources with { items: unknown[], totalCount: number } */
const mergeItemsWithTotalCount = (existing, incoming, { args }) => {
// no existing data
if (!existing || !args?.offset || args.offset < existing.length) {
return incoming || [];
}
// If hook was called multiple times
if (existing?.items?.length && args?.offset < existing.items.length) {
return existing || [];
}
// merge cache and incoming data
const items = [...(existing?.items || []), ...(incoming?.items || [])];
// apply latest result for totalCount
const totalCount = incoming?.totalCount || existing?.totalCount;
return {
...(incoming || existing || {}),
items,
totalCount,
};
};
```
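A hedged usage sketch with `useQuery`/`fetchMore` (the `LIST_ITEMS` document and the variable names are assumptions):
```typescript
import { useQuery } from '@apollo/client';

const { data, fetchMore } = useQuery(LIST_ITEMS, {
  variables: { filter, sort, limit: 20, offset: 0 },
});

// load the next page; the merge function above appends it in the cache
const loadMore = () =>
  fetchMore({
    variables: { offset: data?.listItems.items.length ?? 0 },
  });
```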


@ -0,0 +1,157 @@
If your GraphQL API needs a token refresh flow, you can pass a custom fetch function to Apollo Client.
```typescript
import { ApolloClient as ApolloClientBase, ApolloLink, HttpLink } from '@apollo/client';
import { setContext } from '@apollo/client/link/context';

export const createApolloClient = (
url: string,
logout: () => void,
getAuthorizationData: () => { authorization: string },
refreshToken: () => Promise<
{ accessToken: string; refreshToken: string } | undefined
>,
) =>
new ApolloClientBase({
// ...other options
link: ApolloLink.from([
// ...other options
setContext(async (_, { headers }) => {
return {
headers: {
...headers,
...getAuthorizationData(),
},
};
}),
new HttpLink({
uri: url,
fetch: fetchWithTokenRefresh(logout, refreshToken),
}),
]),
});
```
The custom fetch function for this client is below. You should tune `hasUnauthorizedError` and
`isRefreshRequestOptions` to match your API.
```typescript
/** Global singleton for refreshing promise */
let refreshingPromise: Promise<string> | null = null;
/** Checks if GraphQL errors contain an unauthorized error */
const hasUnauthorizedError = (errors: Array<{ status?: number }>): boolean =>
Array.isArray(errors) &&
errors.some(error => {
return error.status === 401; // Distinguish unauthorized error here
});
/** Detects if customFetch is sending refresh request */
const isRefreshRequestOptions = (options: RequestInit) => {
try {
const body = JSON.parse(options?.body as string);
return body.operationName === 'RefreshToken';
} catch (e) {
return false;
}
};
/** fetchWithTokenRefresh is a custom fetch function with token refresh for apollo */
export const fetchWithTokenRefresh =
(
logout: () => void,
refreshToken: () => Promise<
{ accessToken: string; refreshToken: string } | undefined
>,
) =>
async (uri: string, options: RequestInit): Promise<Response> => {
// already refreshing token, wait for it and then use refreshed token
// or use empty authorization if refreshing failed
if (
!isRefreshRequestOptions(options) &&
refreshingPromise &&
(options.headers as Record<string, string>)?.authorization
) {
const newAccessToken = await refreshingPromise
.catch(() => {
// refreshing token from other request failed, retry without authorization
return '';
});
options.headers = {
...(options.headers || {}),
authorization: newAccessToken,
};
}
return fetch(uri, options).then(async response => {
const text = await response.text();
const json = JSON.parse(text);
// check for unauthorized errors, if not present, just return result
if (
isRefreshRequestOptions(options) ||
!json?.errors ||
!Array.isArray(json.errors) ||
!hasUnauthorizedError(json.errors)
) {
return {
...response,
ok: true,
json: async () =>
new Promise<unknown>(resolve => {
resolve(json);
}),
text: async () =>
new Promise<string>(resolve => {
resolve(text);
}),
};
}
// If unauthorized, refresh token and try again
if (!refreshingPromise) {
refreshingPromise = refreshToken()
.then(async (tokens): Promise<string> => {
refreshingPromise = null;
if (!tokens?.accessToken) {
throw new Error('Session expired');
}
return tokens?.accessToken;
})
.catch(() => {
refreshingPromise = null;
// can't refresh token. logging out
logout();
throw new Error('Session expired');
});
}
// success or any non-auth error
return refreshingPromise
.then(async (newAccessToken: string) => {
// wait for other request's refreshing query to finish, when retry
return fetch(uri, {
...options,
headers: {
...(options.headers || {}),
authorization: newAccessToken,
},
});
})
.catch(async () => {
// refreshing token from other request failed, retry without authorization
return fetch(uri, {
...options,
headers: {
...(options.headers || {}),
authorization: '',
},
});
});
});
};
```


@ -0,0 +1,29 @@
Self-hosted #git repositories with [gitea](https://gitea.io/ru-ru/) and #docker.
## Setting up with docker-compose
```yaml
version: "3"
networks:
gitea:
external: false
services:
server:
image: gitea/gitea:latest
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
restart: always
networks:
- gitea
volumes:
- ./var/lib/gitea:/data
- ./etc/gitea:/etc/gitea
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "222:22"
```


@ -0,0 +1,70 @@
[Photo Prism](https://photoprism.app/) is a free alternative to Google Photos; it can be set up with #docker.
## Docker compose file to run it
Check out the current [example](https://dl.photoprism.app/docker/docker-compose.yml) in photoprism's [documentation](https://docs.photoprism.app/getting-started/docker-compose/).
```yaml
version: '3.5'
services:
  photoprism:
    container_name: photoprism__app
    image: photoprism/photoprism:latest
    depends_on:
      - mariadb
    restart: unless-stopped
    security_opt:
      - seccomp:unconfined
      - apparmor:unconfined
    ports:
      - 2342:2342 # HTTP port (host:container)
    environment:
      PHOTOPRISM_ADMIN_PASSWORD: "password"
      PHOTOPRISM_SITE_URL: "https://service.url/"
      PHOTOPRISM_ORIGINALS_LIMIT: 5000
      PHOTOPRISM_HTTP_COMPRESSION: "gzip"
      PHOTOPRISM_DEBUG: "false"
      PHOTOPRISM_PUBLIC: "false"
      PHOTOPRISM_READONLY: "false"
      PHOTOPRISM_EXPERIMENTAL: "false"
      PHOTOPRISM_DISABLE_CHOWN: "false"
      PHOTOPRISM_DISABLE_WEBDAV: "false"
      PHOTOPRISM_DISABLE_SETTINGS: "false"
      PHOTOPRISM_DISABLE_TENSORFLOW: "false"
      PHOTOPRISM_DISABLE_FACES: "false"
      PHOTOPRISM_DISABLE_CLASSIFICATION: "false"
      PHOTOPRISM_DARKTABLE_PRESETS: "false"
      PHOTOPRISM_DETECT_NSFW: "false"
      PHOTOPRISM_UPLOAD_NSFW: "true"
      PHOTOPRISM_DATABASE_DRIVER: "mysql"
      PHOTOPRISM_DATABASE_SERVER: "mariadb:3306"
      PHOTOPRISM_DATABASE_NAME: "photoprism"
      PHOTOPRISM_DATABASE_USER: "root"
      PHOTOPRISM_DATABASE_PASSWORD: "insecure"
      PHOTOPRISM_SITE_TITLE: "PhotoPrism"
      PHOTOPRISM_SITE_CAPTION: "Browse Your Life"
      PHOTOPRISM_SITE_DESCRIPTION: ""
      PHOTOPRISM_SITE_AUTHOR: ""
      HOME: "/photoprism"
    working_dir: "/photoprism"
    volumes:
      - "./data/originals:/photoprism/originals"
      - "./data/imports:/photoprism/import"
      - "./data/storage:/photoprism/storage"
  mariadb:
    container_name: photoprism__db
    restart: unless-stopped
    image: mariadb:10.6
    security_opt:
      - seccomp:unconfined
      - apparmor:unconfined
    command: mysqld --innodb-buffer-pool-size=128M --transaction-isolation=READ-COMMITTED --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=512 --innodb-rollback-on-timeout=OFF --innodb-lock-wait-timeout=120
    volumes:
      - "./database:/var/lib/mysql" # Important, don't remove
    environment:
      MYSQL_ROOT_PASSWORD: insecure
      MYSQL_DATABASE: photoprism
      MYSQL_USER: photoprism
      MYSQL_PASSWORD: insecure
```


@ -0,0 +1,5 @@
Running this script will attach to a currently running `screen` session or start a new one.
```shell
( screen -r bash || ( screen -d bash && screen -r bash || screen -SAm bash bash ) )
```


@ -0,0 +1,16 @@
Downloads a file over #SSH with rsync and puts it in the current folder.
```bash
#!/bin/bash
PORT=22
USER=user
HOST=example.com
REMOTE_PATH=/tmp
REMOTE_FILE=sample.text
DEST_PATH=./
rsync -a -e "ssh -p $PORT" -P -v \
"$USER@$HOST:$REMOTE_PATH/$REMOTE_FILE" \
"$DEST_PATH"
```


@ -0,0 +1,13 @@
## Config aliases for #SSH hosts
The #SSH config can be used to make aliases for different hosts. It should be put at `~/.ssh/config`. To simply call `ssh router` without parameters, use this:
```
Host router
HostName 192.168.0.1
IdentityFile ~/.ssh/id_rsa
User root
Port 22522
```


@ -0,0 +1,57 @@
## Fallback url for SPA-s
```nginx
server {
# ...
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
# ...
}
```
## Set up for uploads
```nginx
server {
# ...
client_max_body_size 200M;
# ...
}
```
## Reverse proxy for https
The given config forwards `https` traffic to `http` on port `8080` for https://next.vault48.org,
with http2 support if possible.
```nginx
server {
listen 80;
server_name next.vault48.org;
return 301 https://next.vault48.org$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
# managed by Certbot
ssl_certificate /etc/letsencrypt/live/vault48.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/vault48.org/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/vault48.org/chain.pem;
server_name next.vault48.org;
location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8080;
}
}
```


@ -0,0 +1,37 @@
## Install MariaDB on Ubuntu 20.04 LTS
```bash
sudo apt update
sudo apt install mariadb-server
sudo mysql_secure_installation
```
## Access Database from outside
Open `/etc/mysql/mariadb.conf.d/50-server.cnf` and change the `bind-address` to:
```ini
...
bind-address = 0.0.0.0
...
```
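A restart is needed for the change to take effect (a hedged sketch; on Ubuntu the service is usually named `mariadb`):
```shell
sudo systemctl restart mariadb
```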
## Create Administrative User
1. Create a new user `newuser` for the host `localhost` with a new `password`:
```mysql
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
```
2. Grant all permissions to the new user
```mysql
GRANT ALL PRIVILEGES ON * . * TO 'newuser'@'localhost';
```
3. Update permissions
```mysql
FLUSH PRIVILEGES;
```


@ -0,0 +1,105 @@
## Install PostgreSQL 12 on Ubuntu 20.04 LTS
```bash
sudo apt update
sudo apt install -y postgresql postgresql-contrib postgresql-client
sudo systemctl status postgresql.service
```
## Initial database connection
A local connection (from the database server) can be done by the following command:
```bash
sudo -u postgres psql
psql (12.12 (Ubuntu 12.12-0ubuntu0.20.04.1))
Type "help" for help.
postgres=#
```
## Set password for postgres database user
The password for the `postgres` database user can be set with the quick command `\password`
or with `alter user postgres password 'Supersecret';` (see the sketch below). A connection using the `postgres` user
is still not possible from the "outside" due to the default settings in `pg_hba.conf`.
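A minimal sketch of both options from inside `psql`:
```sql
-- interactive prompt:
\password postgres
-- or directly:
ALTER USER postgres PASSWORD 'Supersecret';
```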
### Update pg_hba.conf to allow postgres user connections with password
In order to allow connections of the `postgres` database user not using OS user
authentication, you have to update the `pg_hba.conf` which can be found under
`/etc/postgresql/12/main/pg_hba.conf`.
```shell
sudo vi /etc/postgresql/12/main/pg_hba.conf
...
local all postgres peer
...
```
Change the last section of the above line to `md5`.
```
local all postgres md5
```
A restart is required in order to apply the new configuration:
```bash
sudo systemctl restart postgresql
```
Now a password-based connection from outside the database host is possible (assuming `listen_addresses` and a matching `host` rule in `pg_hba.conf` allow it), e.g.
```bash
psql -U postgres -d postgres -h databasehostname
```
## Creation of additional database users
A database user can be created by the following command:
```sql
create user myuser with encrypted password 'Supersecret';
CREATE ROLE
postgres=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
myuser | | {}
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
```
## Creation of additional databases
One can create new Postgres databases within an instance. To do so, use the `psql`
command to log in (see above).
```sql
CREATE DATABASE dbname OWNER myuser;
CREATE DATABASE
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
dbname | myuser | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
```
You can leave out the `OWNER` part of the command; when doing so, the current user will become
the owner of the newly created database.
To change the owner of an existing database later, you can use the following command:
```sql
postgres=# alter database dbname owner to myuser;
ALTER DATABASE
```


@ -0,0 +1,50 @@
Sometimes you want to add a global variable to your `window`. That thing is called [global module augmentation](https://www.typescriptlang.org/docs/handbook/declaration-merging.html#global-augmentation).
Say you need to call `window.doFancyThings()`. For that you should augment the global `window` interface in a `*.d.ts` file:
```typescript
// the file has to be a module for `declare global` to work
export {};

declare global {
  interface Window {
    doFancyThings: () => void;
  }
}
```
This is useful for declaring global `window.ethereum` (or `window.web3`) in [blockchain](/blockchain/Common%20typescript%20examples) projects with typescript, which use wallet browser extensions.
## Augmenting existing interface
For example, you have class `Sample` without any functionality:
```typescript
// Sample.ts
export class Sample {
// nothing :-)
}
```
Then you want to extend it with a `doFancyThings()` method. That can be achieved with the aforementioned [module augmentation](https://www.typescriptlang.org/docs/handbook/declaration-merging.html#module-augmentation):
```typescript
// fancyThings.ts
import { Sample } from "./Sample";

declare module "./Sample" {
  interface Sample {
    doFancyThings: () => void;
  }
}

// the declaration above only adds the type; provide the implementation too
Sample.prototype.doFancyThings = () => {
  console.log("fancy!");
};
```
Now you can call `sample.doFancyThings()` by importing both `.ts` files:
```typescript
import { Sample } from "./sample";
import "./fancyThings";
const sample = new Sample();
sample.doFancyThings(); // ok
```
This example is useful for [adding global properties to component](../Frontend/Vue/Adding%20global%20properties%20to%20component.md) in vue.js.


@ -0,0 +1,38 @@
This helper generates Typescript types for i18n dictionary json
files by flattening them with a period delimiter. It supports plural forms.
Used for typing [i18n-js](https://www.npmjs.com/package/i18n-js) dictionaries:
```typescript
import en from './en.json';

// I18nLib is your configured i18n-js instance; TranslateOptions comes from that package
type TranslationPath = Flatten<typeof en>;

const t = (key: TranslationPath, options?: TranslateOptions) =>
  I18nLib.t(key, options);
```
The `Flatten` type is defined here:
```typescript
// This one based on answer from StackOverflow:
// https://stackoverflow.com/questions/58434389/typescript-deep-keyof-of-a-nested-object
export type Flatten<T, D extends number = 5> = [D] extends [never]
? never
: T extends PluralForm // plural object
? ''
: T extends object
? { [K in keyof T]-?: Join<K, Flatten<T[K], Prev[D]>> }[keyof T]
: '';
// Adjust this to match your plural forms
type PluralForm = Record<'one' | 'few' | 'many', string>;
type Join<K, P> = K extends string | number
? P extends string | number
? `${K}${'' extends P ? '' : '.'}${P}`
: never
: never;
type Prev = [never, 0, 1, 2, 3, 4, 5, ...Array<0>];
```
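A sketch of what the helper produces, assuming an `en.json` shaped like the comment below:
```typescript
// en.json: { "home": { "title": "Home", "items": { "one": "item", "few": "items", "many": "items" } } }
// Flatten<typeof en> then resolves to 'home.title' | 'home.items'
t('home.title');               // ok
t('home.items', { count: 2 }); // plural key, options forwarded to i18n-js
```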


@ -0,0 +1,20 @@
Useful for type checking at compile and run time:
```typescript
type Fish = { swim: () => void };
type Bird = { fly: () => void };

function isFish(pet: Fish | Bird): pet is Fish {
  return (pet as Fish).swim !== undefined;
}
```
Usage:
```typescript
const pet = getSmallPet(); // returns Fish | Bird
if (isFish(pet)) {
pet.swim();
} else {
pet.fly();
}
```