Blog

  • hutte-project-template

    This repository contains:

    Skeleton SF Project

    Created with sf project generate, extended with:

    • a custom Salesforce theme, a sample Apex class, and an LWC
    • an example Hutte configuration file (hutte.yml), which applies the custom Salesforce theme after scratch org creation and adds a custom button to run a data import
    • a sample data set to demonstrate data import via the Hutte custom button

    Main Hutte Recipes

    VSCode Presets

    • pre-built VSCode settings for the editor, PMD, and Prettier
    • a set of recommended VSCode extensions for Salesforce development

    Code Quality Tools Configuration

    • Prettier baseline configuration
    • PMD baseline configuration
    • ESLint baseline configuration
    • Pre-commit Husky hook configured with Prettier formatting

    Frameworks

    The Hutte Apex Collection is a set of useful open-source Apex frameworks for Salesforce development, built by contributors in the Salesforce community.

    Note that these frameworks are packaged in an unlocked package and only require installation, not deployment. A git repository is provided to browse their contents. They live in an independent repository rather than in this one in order to keep the template repository free of unnecessary metadata, keep a clean .forceignore, and use git submodules efficiently (previous discussion).


    Please check https://docs.hutte.io in order to learn more about setting up and using Hutte.

    Visit original content creator repository
    https://github.com/hutte-recipes/hutte-project-template

  • sugar

    Sugar C


    A different flavour of Tiny C, it’s sugar!

    This fork of tcc intends to add some syntactic sweetness to C-like scripts. It also revisits the basic command-line interface offered to the user, defaulting the compiler to run code on the fly instead of building a binary.


    Install

    You need to make-install Sugar C to get it properly working on your system. Hopefully this will be straightforward, so keep your fingers crossed.

    For Linux and macOS it should be as simple as pasting this line into your terminal:

    $ git clone https://github.com/antonioprates/sugar.git && cd sugar && ./install.sh

    If you run into permission issues, you can also try running the install script with sudo:

    $ sudo ./install.sh

    If you don’t need the source code, you can remove the folder afterwards:

    $ cd .. && rm -rf sugar

    For Windows, you can clone the repo and give win32/build-sugar.bat a try (ugh!); I have never tried it. Maybe send me a message, I will be happy to know it still works, or whether it needs any fixes.

    Usage

    The implicit ‘-run’ directive, together with the <sugar.h> library, makes C scripting easier. You can try and inspect the samples like this:

    $ sugar hello.c

    Or, if the file has the right permissions, and assuming /usr/local/bin/sugar is the sugar install path on your system:

    $ ./hello.c
    Visit original content creator repository https://github.com/antonioprates/sugar
  • silkrouter


    Silk router

    Silk router is a reactive and light-weight (1.5kb gzipped) routing library.

    Installation

    npm install --save silkrouter rxjs

    Silk router is dependent on rxjs for classes such as Observable and Subscription. Please install this package as a separate (peer) dependency.

    Usage

    1. Import the Router class:
    import { Router } from 'silkrouter';
    ...
    2. Create an instance:
    const router = new Router();
    3. Add a route handler:
    router.subscribe((e) => {
      // Listens to route changes
    });
    4. Navigate to a route:
    router.set('/path/to/route'); // Route should always start with a "/"

    Hash router

    Silkrouter also adds hash routing capability. Hash routes are useful when the back-end doesn’t have a way to support page paths. Hash routing can be enabled via the hashRouting flag.

    const router = new Router({
      hashRouting: true,
    });

    Please note that silkrouter replaces the current path with a hash path by default. To disable this behaviour you need to preserve the current path.

    const router = new Router({
      hashRouting: true,
      preservePath: true,
    });

    Path preservation only works for hash routing.

    Disable initialization

    Silkrouter automatically calls the handler as soon as it is attached. This behaviour allows consumers to mount components on page load. To attach the listeners silently, you can disable this behaviour.

    const router = new Router({
      init: false,
    });

    Please note that disabling initialization doesn’t affect the routing functionality. Route changes are still caught by the handlers.

    Operators

    From version 5 onwards silkrouter does not ship its own operators. You can create your own operators as needed, or use the ones built by the awesome JavaScript community.

    const router = new Router();
    
    router.pipe(myOperator()).subscribe((event) => {
      // ...
    });

    myOperator.js

    import { Observable } from 'rxjs';

    export function myOperator() {
      return (observable) =>
        new Observable((subscriber) => {
          const currSubscription = observable.subscribe({
            next(value) {
              // Transform the value before passing it downstream
              subscriber.next(/* updated value */);
            },
            error: (err) => subscriber.error(err),
            complete: () => subscriber.complete(),
          });
          // Teardown: unsubscribe from the source when this observable is unsubscribed
          return () => {
            currSubscription.unsubscribe();
          };
        });
    }

    Contribution

    We invite you all to contribute to silkrouter and make it better. Please feel free to open discussions, fork this repository and raise PRs.

    Visit original content creator repository https://github.com/scssyworks/silkrouter
  • fBomb

    fBomb

    See where in the world the fBomb was dropped

    Installation


    Setting up your own instance of this application is fairly basic.

    To set up your own version of this app, all you have to do is clone this repo:

    $ git clone https://github.com/mgingras/fBomb.git && cd fBomb && npm install
    

    Configuration


    For this application you need to get API keys for Twitter (https://dev.twitter.com) and Google Maps. These are then inserted into config.json. You can also specify the name you want your application to have in this file.

    Below is the current config.json file. Replace the values surrounded by brackets (‘[‘) with your API keys and configuration.

    {
      "consumer_key":"[CONSUMER_KEY]",
      "consumer_secret":"[CONSUMER_SECRET]",
      "oauth_token":"[OATH_TOKEN]",
      "oauth_token_secret":"[OAUTH_TOKEN_SECRET]",
      "gmaps":"[GMAPS_API_KEY]",
      "app_name":"[APP_NAME]",
      "track": "[WORDS_TO_TRACK]"
    }

    You can then run the application with the following:

    $ coffee coffeeApp.coffee
    

    If you have configured it correctly you should be able to browse to localhost:3000 and see some bombs drop!

    Customization


    Tracking

    To change what is being tracked by the application, replace “[WORDS_TO_TRACK]” in config.json with a comma-separated list of words to track (e.g. “fuck,fucks,fucking”).

    Images

    Markers are customizable by replacing ‘fbomb.gif’ and ‘signPost.png’, located in ‘./public/img/’.

    ‘fbomb.gif’ is the initial indicator.
    ‘signpost.png’ is the marker that drops after the gif animation and stays on the map.

    Deployment


    If you want to deploy this app, I suggest Heroku; they have lots of docs to help you out:
    node.js: https://devcenter.heroku.com/articles/getting-started-with-nodejs
    websockets: https://devcenter.heroku.com/articles/node-websockets

    Contact


    Let me know if you have any questions!

    Martin
    martin@mgingras.com

    Visit original content creator repository
    https://github.com/mgingras/fBomb

  • trybe-futebol-clube

    Welcome to the Trybe Futebol Clube repository!


    Objective

    Trybe Futebol Clube is an informational website that provides information about football matches and standings. During development, an API was created that communicates with the front-end, allowing a match to be added after a token is validated.

    What was developed?

    Trybe Futebol Clube is a project that provides information about a football championship. It includes a pre-built front-end that allows the user to log in and generate a token.

    The API can access the database and return information about all teams, or about a specific team by its ID. It also returns information about all matches, or only the matches that are in progress or finished. The API has endpoints to create in-progress matches, update the score and progress status, and filter the overall championship standings, as well as the standings of teams playing at home or away.

    To build the application, Docker was used to create three containers: two with an environment prepared for Node.js (one for the front-end and another for the back-end) and a third for the PostgreSQL database. The application was written in TypeScript, with Express.js to manage routes, handle HTTP requests, and define middlewares. JWT was used to authenticate the token during requests, and the Sequelize library was used as an ORM to abstract PostgreSQL operations.

    For the integration tests, the Mocha, Chai, and Sinon libraries were used, with test coverage of approximately 90% of the back-end application.

    Languages and tools

    • Docker
    • Node.js
    • Typescript
    • Express.js
    • JWT
    • PostgreSQL
    • Sequelize
    • Mocha
    • Chai
    • Sinon

    Installation and execution with Docker

    1 – Clone the repository:

    git clone git@github.com:h3zord/trybe-futebol-clube.git
    

    2 – Enter the repository:

    cd trybe-futebol-clube/app
    

    3 – Start the containers:

    docker compose up -d --build
    

    The app_frontend container runs Node on port 3000, the app_backend container on port 3001, and the database on port 5432.
    http://localhost:3000/
    http://localhost:3001/


    Endpoints

    – Login

    POST method:

    • /login ➜ Logs in with email and password and then generates a token.

    GET method:

    • /login/validate ➜ Checks which type of user is logged in.

    – Team

    GET method:

    • /teams ➜ Lists all teams.
    • /teams/:id ➜ Fetches a team by its ID.

    – Match

    POST method:

    • /matches ➜ Registers a new match.

    GET method:

    • /matches ➜ Lists all matches, only the finished ones, or only those in progress.

    PATCH method:

    • /matches/:id/finish ➜ Updates the status of an in-progress match to finished.
    • /matches/:id ➜ Updates the match score.

    – LeaderBoard

    GET method:

    • /leaderboard ➜ Lists the overall championship standings.
    • /leaderboard/home ➜ Lists the standings of the teams playing at home.
    • /leaderboard/away ➜ Lists the standings of the teams playing away.
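
    For illustration, a hypothetical Python client could exercise these endpoints roughly as follows (the request and response field names, the credentials, and the token header format are assumptions made for this sketch, not taken from the project):

    import requests

    BASE_URL = "http://localhost:3001"

    # Log in to obtain a token (body field names and credentials are placeholders)
    login = requests.post(f"{BASE_URL}/login",
                          json={"email": "user@example.com", "password": "secret"})
    token = login.json()["token"]

    # Send the token back on subsequent requests (header format assumed)
    headers = {"Authorization": token}

    # List all teams (no token required)
    teams = requests.get(f"{BASE_URL}/teams").json()

    # Register a new in-progress match (payload field names are assumptions)
    match = requests.post(f"{BASE_URL}/matches", headers=headers, json={
        "homeTeamId": teams[0]["id"],
        "awayTeamId": teams[1]["id"],
        "homeTeamGoals": 0,
        "awayTeamGoals": 0,
    })
    print(match.status_code, match.json())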

    Running the tests

    1 – Enter the back-end Node container:

    docker exec -it app_backend sh
    

    2 – Run the script:

    npm run test:coverage
    

    Test coverage



    Visit original content creator repository https://github.com/h3zord/trybe-futebol-clube
  • next-htmx

    Next.js + HTMX

    This is a barebones template for using HTMX with Next.js.

    It also includes Tailwind, but does so via a static CSS file, so it’s easy to remove or upgrade to “proper” PostCSS if you want.

    Why did you make this?

    1. I’m riding the hype train and learning HTMX
    2. I like JSX more than HTML templating in other languages
    3. I like Vercel as a host for quick proof-of-concept projects

    Those are bad reasons

    Good

    How to use

    Serve full pages as normal. Instead of using React hooks, use HTMX attributes and create API endpoints to return the corresponding HTML. Check out ~/pages/api/clicked.tsx for an example of what that looks like.

    How it works

    HTMX is included as a static script via a custom Next _document file. Full pages are rendered server-side via Next. As long as you avoid stateful/client components, Next should strip React from the client side entirely. This still allows you to do things like async database calls using server components.

    When HTMX requests an HTML snippet from an API endpoint, we use Next’s pages router to statically render some JSX and return that HTML without returning a full document. That means that even if you do happen to mess up and try to use client components, it won’t be possible to hydrate them on the client.

    Visit original content creator repository
    https://github.com/ViableSystemModel/next-htmx

  • file-upload-engine

    Installation Instructions:

    cd file-upload-engine
    bundle install
    rails db:setup
    rails s

    Optional:

    You need to install ImageMagick if it’s not already on your system (it is used to resize any uploaded images):
    brew install imagemagick

    Logging in on local development:

    As there is no SMTP server set up for local development, when you first sign up it won’t send an email with the confirmation link. Instead, you can find the link in the server logs. Simply copy and paste the link into a browser and this will confirm your email address.

    Updating credentials:

    As you don’t want to be committing access keys, logins, or generally anything you would want to remain a secret, you should be modifying the encrypted credentials.yml.enc file. To modify this file, type the below:
    EDITOR="subl --wait" rails credentials:edit

    When you are finished, simply close the file and it will be encrypted again.

    Postgres troubleshooting:

    If you are on a Mac, installing Postgres should be relatively easy. If Postgres does not install successfully through the bundle command, you can use:
    brew install postgres

    On linux, the process is a bit more involved:
    sudo apt-get install postgresql

    Stopping the server on linux:
    sudo /etc/init.d/postgresql stop

    Starting the server on linux:
    pg_ctl -D /home/linuxbrew/.linuxbrew/var/postgres start

    Visit original content creator repository
    https://github.com/programthis/file-upload-engine

  • 1lab

    1Lab

    A formalised, cross-linked reference resource for mathematics done in Homotopy Type Theory. Unlike the HoTT book, the 1lab is not a “linear” resource: Concepts are presented as a directed graph, with links indicating dependencies.

    Building

    Building the 1Lab is a rather complicated task, which has led to a lot of homebrew infrastructure being developed for it. We build against a specific build of Agda (see the rev field in support/nix/dep/Agda/github.json), and there are also quite a few external dependencies (e.g. pdftocairo, katex). The recommended way of building the 1Lab is using Nix.

    As a quick point of reference, nix-build will type-check and compile the entire thing, and copy the necessary assets (TeX Gyre Pagella and KaTeX’s CSS and fonts) to the right locations. The result will be linked as ./result, which can then be used to serve a website:

    $ nix-build
    $ python -m http.server --directory result

    Note that using Nix to build the website takes around 20-30 minutes, since it will type-check the entire codebase from scratch every time. For interactive development, nix-shell will give you a shell with everything you need to hack on the 1Lab, including Agda and the pre-built Shakefile as 1lab-shake:

    $ 1lab-shake all -j

    Since nix-shell will load the derivation steps as environment variables, you can use something like this to copy the static assets into place:

    $ eval "${installPhase}"
    $ python -m http.server --directory _build/site

    To hack on a file continuously, you can use “watch mode”, which will attempt to only check and build the changed file.

    $ 1lab-shake all -w
    

    Additionally, since the validity of the Agda code is generally upheld by agda-mode, you can use --skip-agda to only build the prose. Note that this will disable checking the integrity of link targets, the translation of `ref`{.Agda} spans, and the code blocks will be right ugly.

    Our build tools are routinely built for x86_64-linux and uploaded to Cachix. If you have the Cachix CLI installed, simply run cachix use 1lab. Otherwise, add the following to your Nix configuration:

    substituters = https://1lab.cachix.org
    trusted-public-keys = 1lab.cachix.org-1:eYjd9F9RfibulS4OSFBYeaTMxWojPYLyMqgJHDvG1fs=
    

    Directly

    If you’re feeling brave, you can try to replicate one of the build environments above. You will need:

    • The cabal-install package manager. Using stack is no longer supported.

    • A working LaTeX installation (TeXLive, etc) with the packages tikz-cd (depends on pgf), mathpazo, xcolor, preview, and standalone (depends on varwidth and xkeyval);

    • Poppler (for pdftocairo);

    • libsass (for sassc);

    • Node + required Node modules. Run npm ci to install those.

    You can then use cabal-install to build and run our specific version of Agda and our Shakefile:

    $ cabal install Agda -foptimise-heavily
    # This will take quite a while!
    
    $ cabal v2-run shake -- -j --skip-agda
    # the double dash separates cabal-install's arguments from our
    # shakefile's.
    Visit original content creator repository https://github.com/the1lab/1lab
  • pyspark-openmrs-etl

    pyspark-openmrs-etl


    • The motivation of this project is to provide the ability to process data in real time from various sources like OpenMRS, EID, etc.

    Requirements

    Make sure you have the latest Docker and Docker Compose:

    1. Install Docker.
    2. Install Docker-compose.
    3. Make sure you have Spark 2.3.0 running
    4. Clone this repository

    Getting started

    You will only have to run 3 commands to get the entire cluster running. Open up your terminal and run these commands:

    # this will start 7 containers (mysql, kafka, connect (dbz), openmrs, zookeeper, portainer and cAdvisor)
    # cd /openmrs-etl
    export DEBEZIUM_VERSION=0.8
    docker-compose -f docker-compose.yaml up
    
    # Start MySQL connector
    curl -i -X POST -H "Accept:application/json" -H  "Content-Type:application/json" http://localhost:8083/connectors/ -d @register-mysql.json
    
    
    # build and run spark cluster. (realtime streaming and processing)
    # https://www.youtube.com/watch?v=MNPI925PFD0
    sbt package
    sbt run 
     
    

    If everything runs as expected, expect to see all these containers running:


    You can access the Portainer container overview here: http://localhost:9000

    Openmrs

    The OpenMRS application will eventually be accessible at http://localhost:8080/openmrs. Credentials for the shipped demo data:

    • Username: admin
    • Password: Admin123

    Spark Jobs Monitor & Visualization

    http://localhost:4040

    Docker Container Manager: Portainer

    http://localhost:9000

    MySQL client

    docker-compose -f docker-compose.yaml exec mysql bash -c 'mysql -u $MYSQL_USER -p$MYSQL_PASSWORD inventory'
    

    Schema Changes Topic

    docker-compose -f docker-compose.yaml exec kafka /kafka/bin/kafka-console-consumer.sh     --bootstrap-server kafka:9092     --from-beginning     --property print.key=true     --topic schema-changes.openmrs
    

    How to Verify MySQL connector (Debezium)

    curl -H "Accept:application/json" localhost:8083/connectors/
    

    Shut down the cluster

    docker-compose -f docker-compose.yaml down
    

    cAdvisor: Docker & System Performance

    http://localhost:9090

    Debezium Topics


    Consume messages from a Debezium topic [obs, encounter, person, etc.]

    • All you have to do is change the topic to --topic dbserver1.openmrs.<table>, for example:
       docker-compose -f docker-compose.yaml exec kafka /kafka/bin/kafka-console-consumer.sh \
        --bootstrap-server kafka:9092 \
        --from-beginning \
        --property print.key=true \
        --topic dbserver1.openmrs.obs
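
    For illustration, the same topic could be consumed from Spark with a minimal PySpark Structured Streaming job like the sketch below (a hypothetical example, not the repository's sbt-built streaming job; it assumes the Spark Kafka connector package, e.g. spark-sql-kafka, is available on the classpath):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = (
        SparkSession.builder
        .appName("openmrs-obs-stream")
        .getOrCreate()
    )

    # Subscribe to the Debezium change-event topic for the obs table
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "dbserver1.openmrs.obs")
        .option("startingOffsets", "earliest")
        .load()
    )

    # Debezium publishes JSON payloads; keep the raw key/value as strings here
    decoded = events.select(
        col("key").cast("string"),
        col("value").cast("string"),
    )

    # Print the change events to the console as they arrive
    query = (
        decoded.writeStream
        .outputMode("append")
        .format("console")
        .start()
    )
    query.awaitTermination()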

    Cluster Design Architecture

    • This section attempts to explain how the clusters work by breaking everything down

    • Everything here has been dockerized so you don’t need to do these steps

    Directory Structure

    project
    │   README.md 
    │   kafka.md  
    │   debezium.md
    │   spark.md
    │   docker-compose.yaml
    │   build.sbt
    │
    └───src
    │   │   file011.txt
    │   │   file012.txt
    │   │
    │   └───subfolder1
    │       │   file111.txt
    │       │   file112.txt
    │       │   ...
    │   
    └───project
        │   file021.txt
        │   file022.txt
    

    KAFKA CLUSTER DESIGN CONCERNS


    1. How many brokers will we have? This will determine how scalable and fast the cluster will be.

    2. How many producers and consumers will we need in order to ingest and process encounter, obs, orders, person, etc.?

    3. How many partitions will we have per topic?

      • we will definitely need to come up with an intelligent way of calculating the number of partitions per topic
      • keeping in mind that this is correlated with “fault tolerance” and speed of access
    4. Will we allow automatic partition assignment or go manual?

      • going manual is crucial for parallel processing
    5. Will we need consumer groups in this design?

      • keep in mind that the obs producer will have many transactions in parallel
    6. What replication factor (RF)? RF is the number of copies of each partition stored on different brokers.

      • keeping in mind that the replication factor is used to achieve fault tolerance
      • it also depends on the number of brokers we will have
      • it should be predetermined and set during topic creation
    7. Kafka doesn’t retain data forever; that’s not its job. There are 2 properties, log.retention.ms and log.retention.bytes, which determine retention. The default is 7 days.

      • log.retention.ms – retention by time (default is 7 days); data will be deleted after 7 days
      • log.retention.bytes – retention by size (the size applies per partition)
    8. How many times should we set the producer to retry after getting an error? (The default is 0.)

    9. The order of delivery in an asynchronous send is not guaranteed; could this be a potential threat?

    10. Do we need to use a consumer group? (This can scale up the speed of processing.)

      • we will have to consider designing for rebalancing using offsets
      • why do we even need it?
        • it allows you to process a topic in parallel
        • it automatically manages partition assignment
        • it detects entry/exit/failure of a consumer and performs partition rebalancing
    11. What about autocommit? Should we override it to false?

      • this will allow us to ensure that we don’t lose data from the pipeline in case our permanent storage service goes down right after data processing
    12. Schema evolution design strategy

      • so that our producers and consumers can evolve; otherwise we will have to create duplicate producers and consumers in case of changes in the schema
    Visit original content creator repository https://github.com/fatmali/pyspark-openmrs-etl
  • sense-embedding


    Sense Embedding

    Datasets

    The datasets used can be found here:

    Preprocessing

    Before training the model, we need to preprocess the raw dataset. We take EuroSense as an example. EuroSense consists of a single large XML file (21GB uncompressed for the high-precision version); even though it is a multilingual corpus, we will use only the English sentences. The file can be filtered with the filter_eurosense() function inside the preprocessing/eurosense.py file.

    The EuroSense file contains sentences with already tokenized text. Each annotation marks the sense of a word in the text, identified by the anchor attribute. Each annotation provides the lemma of the word it is tagging and the synset id.

    <sentence id="0">
      <text lang="en">It is vital to minimise the grey areas and  [...] </text>
      <annotations>
        <annotation lang="en" type="NASARI" anchor="areas" lemma="area"
            coherenceScore="0.2247" nasariScore="0.9829">bn:00005513n</annotation>
        ...
      </annotations>
    </sentence>
    

    It is convenient to preprocess the XML into a single text file, replacing all the anchors with the corresponding lemma_synset. A line in the parsed dataset, from the example above, is:

    It is vital to minimise the grey area_bn:00005513n and [...]
    

    We can run the parse.py script to obtain this parsed dataset.

    python code/parse.py es -i es_raw.xml -o parsed_es.txt 
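
    The core of this anchor replacement can be sketched in a few lines of Python (a simplified illustration that assumes the XML structure shown above, not the repository's parse.py implementation):

    import xml.etree.ElementTree as ET

    def parse_eurosense(xml_path, out_path, lang="en"):
        # Stream over <sentence> elements so the huge file never sits in memory at once
        with open(out_path, "w", encoding="utf-8") as out:
            for _, elem in ET.iterparse(xml_path):
                if elem.tag != "sentence":
                    continue
                text = elem.findtext(f"text[@lang='{lang}']") or ""
                for ann in elem.iter("annotation"):
                    anchor, lemma, synset = ann.get("anchor"), ann.get("lemma"), ann.text
                    if ann.get("lang") == lang and anchor and synset:
                        # Replace the anchored word with its lemma_synset token
                        text = text.replace(anchor, f"{lemma}_{synset}")
                if text:
                    out.write(text + "\n")
                elem.clear()  # free the processed element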

    Train

    The Gensim implementations of Word2Vec and FastText are used to train the sense vectors. The training script is implemented in the train.py file. To start the training phase, run:

    python code/train.py parsed_es.txt -o sensembed.vec

    For a complete list of options run python code/train.py -h

    usage: train.py [-h] -o OUTPUT [-m MODEL] [--model_path SAVE_MODEL]
                    [--min-count MIN_COUNT] [--iter ITER] [--size SIZE]
                    input [input ...]
    
    positional arguments:
      input                 paths to the corpora
    
    optional arguments:
      -h, --help            show this help message and exit
      -o OUTPUT             path where to save the embeddings file
      -m MODEL              model implementation, w2v=Word2Vec, ft=FastText
      --model_path SAVE_MODEL
                            path where to save the model file
      --min-count MIN_COUNT
                            ignores all words with total frequency lower than this
      --iter ITER           number of iterations over the corpus
      --size SIZE           dimensionality of the feature vectors

    The output should be in the Word2Vec format, where the vocabulary is composed of lemma_synset entries and their corresponding vectors.

    number_of_senses embedding_dimension
    lemma1_synset1 dim1 dim2 dim3 ... dimn
    lemma2_synset2 dim1 dim2 dim3 ... dimn
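
    Since this is the standard Word2Vec text format, the resulting file can be loaded with gensim, for example (a usage sketch; the sense key below is the one from the EuroSense example above):

    from gensim.models import KeyedVectors

    # Load the sense embeddings produced by train.py (Word2Vec text format)
    senses = KeyedVectors.load_word2vec_format("sensembed.vec", binary=False)

    # Nearest sense vectors to the sense of "area" from the EuroSense example
    print(senses.most_similar("area_bn:00005513n", topn=5))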
    

    Evaluation

    The evaluation consists of measuring the similarity or relatedness of pairs of words. Word similarity datasets (e.g. WordSimilarity-353) consist of a list of word pairs. For each pair we have a similarity score established by human annotators:

    Word1     Word2     Gold
    --------  --------  -----
    tiger     cat       7.35
    book      paper     7.46
    computer  keyboard  7.62
    

    The scoring algorithm inside score.py computes the cosine similarity between all the senses for each pair of words in the word similarity dataset:

    for each w_1, w_2 in ws353:
       S_1 <- all sense embeddings associated with w_1
       S_2 <- all sense embeddings associated with w_2
       score <- -1.0
       For each pair s_1 in S_1 and s_2 in S_2 do:
           score = max(score, cos(s_1, s_2))
       return score
    

    where cos(s_1, s_2) is the cosine similarity between vector s_1 and s_2.

    Now we check our scores against the gold ones in the dataset. To do so, we calculate the Spearman correlation between gold similarity scores and cosine similarity scores.

    Word1     Word2     Gold   Cosine
    --------  --------  -----  ------
    tiger     cat       7.35   0.452
    book      paper     7.46   0.784
    computer  keyboard  7.62   0.643
    
    Spearman([7.35, 7.46, 7.62], [0.452, 0.784, 0.643]) = 0.5
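
    A minimal Python sketch of this scoring loop and the Spearman step (an illustration that assumes gensim 4's KeyedVectors with lemma_synset keys and single-token lemmas; score.py may differ in its details):

    from scipy.stats import spearmanr

    def max_cosine(senses, w1, w2):
        # All sense embeddings whose lemma part matches the surface word
        s1 = [k for k in senses.key_to_index if k.split("_")[0] == w1]
        s2 = [k for k in senses.key_to_index if k.split("_")[0] == w2]
        score = -1.0
        for a in s1:
            for b in s2:
                score = max(score, senses.similarity(a, b))  # cosine similarity
        return score

    def evaluate(senses, pairs):
        # pairs: (word1, word2, gold_score) tuples read from ws353.tab
        gold = [g for _, _, g in pairs]
        cosine = [max_cosine(senses, w1, w2) for w1, w2, _ in pairs]
        rho, _ = spearmanr(gold, cosine)
        return rho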
    

    The score can be computed by running the following command

    python code/score.py sensembed.vec resources/ws353.tab
    Visit original content creator repository https://github.com/Riccorl/sense-embedding