Commit c6d99177 authored by Alexander Rosenberg's avatar Alexander Rosenberg

Merge branch 'master' into 5-add-a-question-and-answer-to-the-faq-regarding-root-access-on-the-cluster
parents 3cf6e1e5 599902fd
......@@ -17,12 +17,18 @@ code examples are provided under the [MIT](https://opensource.org/licenses/MIT)
### Install build tools and dependencies.
<details>
<summary>Liquid 4.0.3</summary>
> [!WARNING]
> If you have the latest dependencies installed, the following no longer applies.
>
> The original `jekyll-rtd-theme` 2.0.10 required `github-pages` 209, which effectively capped the version of Liquid to 4.0.3.
> Because Liquid 4.0.3 and older were never updated to work with Ruby 3.2.x, Ruby 3.1.x or older was required.
> https://talk.jekyllrb.com/t/liquid-4-0-3-tainted/7946/18
>
> #### With Cygwin
> As of 8/8/2024, Cygwin provided Ruby versions 2.6.4-1 and 3.2.2-2. You would need to make sure to install the former. As the version of bundler supplied with Ruby 2.6 is too old and the version of RubyGems is too new, the correct versions of RubyGems and bundler would need to be installed manually after installing all the other dependencies:
> ```
> gem update --system 3.2.3
> gem install bundler -v 2.1.4
......@@ -31,6 +37,8 @@ code examples are provided under the [MIT](https://opensource.org/licenses/MIT)
> bundler -v
> ```
</details>
To allow building of native extensions, install `ruby-devel`, `gcc`, and `make`.
Install `libxml2`, `libxml2-devel`, `libxslt`, `libxslt-devel`, `libiconv`,
......@@ -58,6 +66,10 @@ want to try running `bundle update` or removing `Gemfile.lock` and then running
git clone https://github.com/starhpc/docs.git star-docs
cd star-docs
gem install bundler
bundle config set --local path ~/.bundler # Optionally specify where to install gems (SO Q&A #8257833).
# Otherwise, bundler may attempt to install gems system-wide,
# e.g. /usr/share/gems, depending on your GEM_HOME
# (see SO Q&A #11635042 and #3408868).
bundle install
bundle exec jekyll serve
```
......
......@@ -16,4 +16,4 @@ exclude:
plugins:
- jemoji
- jekyll-avatar
- jekyll-mentions
......@@ -63,14 +63,37 @@ Yes. Please see `/software/python_r_perl`.
### How can I check my disk quota and disk usage?
To check the disk quota of your home directory (`/home/username`), you can use the `repquota` command, which prints a summary of the disk usage and quotas for the specified file systems.
```
$ /usr/sbin/repquota -a -s
                        Block limits                 File limits
User            used    soft    hard  grace    used  soft  hard  grace
cchave6   --    116M   1024M   1280M           1922     0     0
```
If you want to see the quota on the home directory, note that on ext4 file systems the quota information is stored in files named `aquota.user` and `aquota.group` at the root of the file system.
Here:

- **Soft limit**: a warning threshold. A user can exceed this limit temporarily, but must reduce usage back under it within a "grace period."
- **Hard limit**: the absolute maximum disk space or number of files a user can use. This limit cannot be exceeded at all.
- **Grace period**: the amount of time a user is allowed to exceed the soft limit. Once this period expires, the soft limit is enforced like a hard limit.
- **File limits (inodes)**: limits on the number of files a user can create, regardless of their size.
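For a quick self-check against these limits, you can also query your own usage directly (a minimal sketch; the exact output depends on how quotas are configured):

```
$ quota -us            # show your own quota in human-readable units
$ du -sh ~             # measure total usage of your home directory
```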
To check the quota of the main project storage (the parallel file system mounted at `/fs1/proj/<project>`), you can use this command:

```
$ mmlsquota -j <fileset_name> <filesystem_name>
```

The `-j` option specifies that you are querying a fileset. Filesets in GPFS are similar to directories that can have independent quota limits.

- `fileset_name`: the name of the fileset whose quota you want to check.
- `filesystem_name`: the name of the GPFS file system in which the fileset resides.

For example:

```
$ mmlsquota -j project_fileset gpfs1
```
### How many CPU hours have I spent?
......
......@@ -9,4 +9,4 @@ Please see the [Quick Start Guide]({{site.baseurl}}{% link quickstart/quickstart
## Getting help
First, please read [how to write good support requests]({{site.baseurl}}{% link help/writing-support-requests.md %}). Then [contact us]({{site.baseurl}}{% link help/contact.md %}).
......@@ -8,48 +8,115 @@ sort: 1
The Star HPC Cluster is a computing facility designed for a variety of research and computational tasks. It combines advanced computing **nodes** and a high-speed **storage system** with a suite of **software applications**.
SLURM (Simple Linux Utility for Resource Management) is our chosen job scheduler and queueing system that efficiently manages resource allocation, ensuring everyone gets the right amount of resources at the right time.
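To give a sense of how jobs are submitted, here is a minimal batch script (a sketch only; the partition name and resource values are assumptions, so consult the cluster documentation for actual limits):

```
#!/bin/bash
#SBATCH --job-name=hello        # name shown in the queue
#SBATCH --partition=gpu         # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --gres=gpu:1            # request one GPU
#SBATCH --time=00:10:00         # walltime limit (HH:MM:SS)

srun hostname
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.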
Apptainer (formerly Singularity) is also a major application on the cluster. Apptainer is a containerization platform similar to Docker with the major difference that it runs under user privileges instead of `root`. This platform is enhanced by NGC (NVIDIA GPU Cloud) which provides access to a wide array of pre-built, GPU-optimized software containers for diverse applications. This integration saves all users a lot of time as they don’t need to set up the software applications from scratch and can just pull and use the NGC images with Apptainer.
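For illustration, pulling and running an NGC image with Apptainer looks roughly like this (the image tag below is hypothetical; browse the NGC catalog for current tags):

```
# Pull a PyTorch container image from NGC
apptainer pull pytorch.sif docker://nvcr.io/nvidia/pytorch:24.05-py3

# Run a command inside the container with GPU support (--nv)
apptainer exec --nv pytorch.sif python3 -c "import torch; print(torch.cuda.is_available())"
```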
The cluster also supports various software applications tailored to different needs: Python and R for data analysis, MATLAB for technical computing, Jupyter for interactive projects, and OpenMPI for parallel computing. Anaconda broadens these capabilities with packages for scientific computing, while NetCDF manages large datasets. For big data tasks, Hadoop/Spark offers powerful processing tools.
## Hardware
### Login Node
### Compute Nodes
- Two Apollo 6500 Gen10+ HPE nodes, _each_ containing 8 NVIDIA A100 SXM GPUs.
- One HPE ProLiant DL385 Gen10+ v2, containing 2 A30 SXM NVIDIA GPUs.
- Two XL675d Gen10+ servers (Apollo 6500 Gen10+ chassis), _each_ containing 8 NVIDIA A100 SXM4 GPUs.
- One HPE DL385 Gen10+ v2 with 2 A30 PCIe GPUs.
- Two HPE DL380a Gen11 servers, _each_ containing 2 NVIDIA H100 80GB GPUs.
- Two Cray XD665 nodes, _each_ containing 4 NVIDIA HGX H100 80GB GPUs.
- One Cray XD670 node, containing 8 NVIDIA HGX H100 80GB GPUs.
#### HPE Apollo 6500 Gen10
| Attribute\Node Name           | gpu1                                                           | gpu2                                                           |
| ----------------------------- | -------------------------------------------------------------- | -------------------------------------------------------------- |
| Model Name                    | HPE ProLiant XL675d Gen10 Plus; Apollo 6500 Gen10 Plus Chassis | HPE ProLiant XL675d Gen10 Plus; Apollo 6500 Gen10 Plus Chassis |
| Sockets                       | 2                                                              | 2                                                              |
| Cores per Socket              | 32                                                             | 32                                                             |
| Threads per Core              | 2                                                              | 2                                                              |
| Memory                        | 1024 GiB Total Memory (16 x 64GiB DIMM DDR4)                   | 1024 GiB Total Memory (16 x 64GiB DIMM DDR4)                   |
| GPU                           | 8 SXM NVIDIA A100s                                             | 8 SXM NVIDIA A100s                                             |
| Local Storage (Scratch space) | 407GB                                                          | 407GB                                                          |
#### HPE DL385 Gen10
| Attribute\Node Name           | cn01                                       |
| ----------------------------- | ------------------------------------------ |
| Model Name                    | HPE ProLiant DL385 Gen10 Plus v2           |
| Sockets                       | 2                                          |
| Cores per Socket              | 32                                         |
| Threads per Core              | 2                                          |
| Memory                        | 256GiB Total Memory (16 x 16GiB DIMM DDR4) |
| GPU                           | 2 SXM NVIDIA A30s                          |
| Local Storage (Scratch Space) | 854GB                                      |
#### XL675d Gen10+ (Apollo 6500 Chassis)
| Attribute\Node Name | gpu4 | gpu5 |
| ----------------------------- | -------------------------------------- | -------------------------------------- |
| Model Name | HPE ProLiant XL675d Gen10 Plus Chassis | HPE ProLiant XL675d Gen10 Plus Chassis |
| Sockets | 2 (AMD EPYC 7513 @ 2.60 GHz) | 2 (AMD EPYC 7513 @ 2.60 GHz) |
| Cores per Socket              | 32 Physical Cores                       | 32 Physical Cores                       |
| Threads per Core | 2 (128 Logical Cores) | 2 (128 Logical Cores) |
| Memory | 1024 GiB DDR4 3200 RAM | 1024 GiB DDR4 3200 RAM |
| GPU | 8 NVIDIA A100 80GB SXM4 GPUs | 8 NVIDIA A100 80GB SXM4 GPUs |
| Local Storage (Scratch Space) | 2x 480GB SSD | 2x 480GB SSD |
#### HPE DL385 Gen10+ v2
| Attribute\Node Name | cn02 |
| ----------------------------- | -------------------------------- |
| Model Name | HPE ProLiant DL385 Gen10 Plus v2 |
| Sockets | 2 (AMD EPYC 7513 @ 2.60 GHz) |
| Cores per Socket              | 32 Physical Cores                |
| Threads per Core | 2 (128 Logical Cores) |
| Memory | 256GiB DDR4 RAM |
| GPU | 2 NVIDIA A30 24GB HBM2 PCIe GPUs |
| Local Storage (Scratch Space) | 854GB                            |
#### HPE DL380a Gen11
| Attribute\Node Name | gpu6 | gpu7 |
| ----------------------------- | -------------------------------------------- | -------------------------------------------- |
| Model Name | HPE DL380a Gen11 | HPE DL380a Gen11 |
| Sockets | 2 (Intel Xeon-P 8462Y+ @ 2.8GHz) | 2 (Intel Xeon-P 8462Y+ @ 2.8GHz) |
| Cores per Socket              | 32                                            | 32                                            |
| Threads per Core | 2 (128 Logical Cores) | 2 (128 Logical Cores) |
| Memory | 512 GiB DDR5 RAM | 512 GiB DDR5 RAM |
| GPU | 2 NVIDIA H100 80GB GPUs (NVAIE subscription) | 2 NVIDIA H100 80GB GPUs (NVAIE subscription) |
| Network | 4-port GbE, 1-port HDR200 InfiniBand | 4-port GbE, 1-port HDR200 InfiniBand |
| Local Storage (Scratch Space) | 1TB SSD | 1TB SSD |
#### Cray XD665 Nodes
| Attribute\Node Name | cray01 | cray02 |
| ----------------------------- | -------------------------------------- | -------------------------------------- |
| Model Name | Cray XD665 | Cray XD665 |
| Sockets | 2 (AMD EPYC Genoa 9334 @ 2.7GHz) | 2 (AMD EPYC Genoa 9334 @ 2.7GHz) |
| Cores per Socket              | 32                                      | 32                                      |
| Threads per Core | 2 (128 Logical Cores) | 2 (128 Logical Cores) |
| Memory | 768 GiB DDR5 RAM | 768 GiB DDR5 RAM |
| GPU | 4 NVIDIA HGX H100 80GB SXM GPUs | 4 NVIDIA HGX H100 80GB SXM GPUs |
| Network | 2-port 10GbE, 1-port HDR200 InfiniBand | 2-port 10GbE, 1-port HDR200 InfiniBand |
| Local Storage (Scratch Space) | 1TB SSD | 1TB SSD |
#### Cray XD670 Node
| Attribute\Node Name | cray03 |
| ----------------------------- | -------------------------------------- |
| Model Name | Cray XD670 |
| Sockets | 2 (Intel Xeon-P 8462Y+ @ 2.8GHz) |
| Cores per Socket              | 32 Physical Cores                       |
| Threads per Core | 2 (128 Logical Cores) |
| Memory | 2048 GiB DDR5 RAM |
| GPU | 8 NVIDIA HGX H100 80GB SXM GPUs |
| Network | 2-port 10GbE, 1-port HDR200 InfiniBand |
| Local Storage (Scratch Space) | 2TB SSD |
### Storage System
Our storage system consists of four HPE PFSS nodes, collectively offering a total of 63TB of storage. You can think of these four nodes as one unified 63TB storage unit, as together they form a **Parallel File System Storage** component. These nodes work in parallel and are all mounted under **one** mount point (`/fs1`) on the GPU nodes only.
## Our vision
......@@ -60,19 +127,17 @@ Making complex and time-intensive calculations simple and accessible.
Our heart is set on creating a community where our cluster is a symbol of collaboration and discovery. We wish to provide a supportive space where researchers and students can express their scientific ideas and explore uncharted areas. We aim to make the complicated world of computational research a shared path of growth, learning, and significant discoveries for all who are eager to learn.
## Operations Team
- Alexander Rosenberg
- Mani Tofigh
## The Board
- Edward H. Currie
- Daniel P. Miller
- Adam C. Durst
- Jason D. Williams
- Thomas G. Re
- Oren Segal
- John Ortega
......@@ -87,5 +87,5 @@ Project-specific directories may be created upon request for shared storage amon
To make proper use of the cluster, please familiarize yourself with the basics of using Slurm, fundamental HPC concepts, and the cluster's architecture.
You may be familiar with the `.bashrc`, `.bash_profile`, or `.cshrc` files for environment customization. To support different environments needed for different software packages, [environment modules]({{site.baseurl}}{% link software/env-modules.md %}) are used. Modules allow you to load and unload various software environments tailored to your computational tasks.
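A typical session with environment modules might look like the following (a sketch; the module names and versions shown are assumptions, so run `module avail` to see what is actually installed):

```
module avail                  # list available modules
module load python/3.11      # load a module (hypothetical name/version)
module list                   # show currently loaded modules
module unload python/3.11    # unload a specific module
module purge                  # unload all loaded modules
```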
# Virtual Environment Guide
Managing software dependencies and configurations can be challenging in an HPC environment. Users often need different versions of the same software or libraries, leading to conflicts and complex setups. [Environment modules]({{site.baseurl}}{% link software/env-modules.md %}) provide a solution by allowing users to dynamically modify their shell environment using simple commands. This simplifies the setup process, ensures that users have the correct software environment for their applications, and reduces conflicts and errors caused by incompatible software versions. Environment modules work on the same principle as virtual environments, i.e. the manipulation of environment variables. If an environment module is not available for a given version you need, you can instead create a virtual environment using the standard version manager tools provided with many common languages. Virtual environments allow for managing different versions of languages and dependencies independent of the system version or other virtual environments, so they are often used by developers to isolate dependencies for different projects.
This guide provides different methods for creating virtual environments and managing dependencies across multiple languages including Python, R, Julia, Rust, C, C++, and others. This allows you to create projects in isolated environments and install dependencies without the use of root or sudo access.
## Python
### How to create and use a virtual environment in Python
There are several different ways you can create a virtual environment and install packages for Python. The `venv` module that comes with Python 3 is a lightweight tool that provides a standard way to create virtual environments. It is suitable for simple projects with minimal external dependencies. `virtualenv` is another tool that serves a similar purpose, but works with both Python 2 and 3. In either case, you would use `pip` to manage and install the Python packages in the virtual environments created with `venv` or `virtualenv`. Another way that is often recommended is using `conda`, the package manager and environment manager tool provided by Anaconda. Anaconda is better suited for projects that require data science libraries and have more complex dependencies. All these tools allow you to manage virtual environments for Python and packages within that environment. Below are directions for using venv to illustrate the concept. The steps may vary with other tools.
#### Setup environment
Create a new Python virtual environment using venv.
1. Ensure Python is installed:
```
python3 --version
```
If not installed, use your distribution's package manager to install Python 3.
2. Create a new directory for your project:
```
mkdir Project
```
3. Navigate to the new directory:
```
cd Project
```
4. Create a virtual environment:
```
python3 -m venv research1
```
5. Check if the environment was created:
```
ls
```
It should be listed as a directory in the results of running `ls`:
```
research1
```
6. Activate the environment for use:
```
source research1/bin/activate
```
Your prompt should now be prefixed with the environment name:
```
(research1) user@super-computer
```
#### Deactivate environment
Deactivate environment when finished:
```
deactivate
```
#### Installing packages
Install packages specific to your project.
1. Upgrade pip to the latest version:
```
pip install --upgrade pip
```
2. List installed packages:
```
pip list
```
3. Install new package:
```
pip install package_name
```
For a specific version: `pip install package_name==1.0.0`
4. List installed packages again to ensure the needed packages are installed:
```
pip list
```
#### Saving dependencies
Saving your project's dependencies allows you to easily recreate your environment on another machine or share it with others.
1. Save packages to a file:
```
pip freeze > requirements.txt
```
2. Ensure the file was successfully created with package info:
```
cat requirements.txt
```
3. Consider keeping development-only dependencies in a separate file (e.g., generated from a dedicated development environment):
```
pip freeze > requirements-dev.txt
```
#### Using dependencies
Install dependencies from a requirements file when setting up a project on a new machine or collaborating with others.
1. Install dependencies from requirements file:
```
pip install -r requirements.txt
```
2. For development environments:
```
pip install -r requirements-dev.txt
```
3. List installed packages to ensure dependencies are installed:
```
pip list
```
#### Running Python scripts
Run a script within the virtual environment:
1. Ensure the virtual environment is activated
2. Run the script:
```
python your_script.py
```
#### Delete virtual environment
Deleting the virtual environment removes all installed packages and the environment itself.
1. Deactivate the environment if it's active:
```
deactivate
```
2. Delete directory with virtual environment:
```
rm -rf research1
```
#### Other tools
#### Virtualenv
Install virtualenv
```
python -m pip install --user virtualenv
```
Create an environment
```
virtualenv <environment_name>
```
Activate the environment
```
source <environment_name>/bin/activate
```
#### Conda
Create an environment with conda
```
conda create --name <environment_name> python=<version>
```
Activate the environment
```
conda activate <environment_name>
```
## R
### How to create and use a virtual environment in R
There are several ways to manage project environments and install packages in R. The renv package is a popular tool for creating project-specific libraries. It's suitable for most R projects and works well with version control systems. Another option is packrat, which was a precursor to renv and serves a similar purpose. Regardless, both options allow you to manage project environments and packages within R. Below are directions using renv to create and manage a virtual environment.
#### Setup environment
Create a new R project and initialize it with renv
1. Ensure R is installed:
```
R --version
```
2. Create directory for project:
```
mkdir Project
```
3. Navigate to directory:
```
cd Project
```
4. Create an R project (this uses the `usethis` package; install it first if it is not already available):
```R
R
> install.packages("usethis")
> usethis::create_project(".")
```
5. Install renv for virtual environments:
```R
> install.packages("renv")
```
6. Initialize renv:
```R
> renv::init()
```
This creates a project-specific library and a `.Rprofile` file.
#### Activate an environment
Activating an environment in R loads the project-specific library, ensuring that your code uses the correct versions of packages for your project.
```R
> renv::activate()
```
#### Deactivate an environment
Deactivating an environment returns you to the global R library, which is useful when switching between projects.
```R
> renv::deactivate()
```
#### Installing Packages
Installing packages in an renv environment adds them to your project-specific library.
1. Install a package:
```R
> install.packages("package_name")
```
2. Record the new package in the lockfile:
```R
> renv::snapshot()
```
#### Using dependencies
Save the state of your environment for reuse.
1. Save environment state to lock file:
```R
> renv::snapshot()
```
2. Recreate another environment given the lock file:
```R
> renv::restore()
```
3. Update all packages in the project:
```R
> renv::update()
```
#### Running R scripts
Run an R script within the project environment:
1. Ensure you're in the project directory
2. Run the script:
```
Rscript your_script.R
```
#### Sharing environments
To share your project environment:
1. Include both renv.lock and .Rprofile in your version control system
2. Others can recreate your exact environment using:
```R
> renv::restore()
```
#### Delete virtual environment
Remove the renv directory and associated files. This deletes the environment and its packages.
1. Exit R if you're in an R session
2. Delete directory with virtual environment:
```
rm -rf renv
```
3. Remove .Rprofile and renv.lock files:
```
rm .Rprofile renv.lock
```
## Julia
### How to create and use a virtual environment in Julia
Julia's built-in package manager, Pkg, provides functionality similar to virtual environments in other languages. The primary method is using project environments, which are defined by Project.toml and Manifest.toml files. These environments allow you to have project-specific package versions and dependencies. To create and manage these environments, you use Julia's REPL in package mode (accessed by pressing `]`).
#### Setup environment
Create a new project directory and activate it as a Julia environment.
1. Check Julia version:
```
julia --version
```
2. Create directory for project:
```
mkdir Project
```
3. Navigate to directory:
```
cd Project
```
4. Start Julia and enter package manager mode:
```
julia
julia> ]
```
5. Create a new environment and activate it:
```
(@v1.10) pkg> activate .
```
This creates a new environment in the current directory.
6. Create Project.toml and Manifest.toml files:
```
(@v1.10) pkg> instantiate
```
#### Activate environment
Activate an existing environment:
```
julia> ]
(@v1.10) pkg> activate /path/to/your/project
```
#### Deactivate environment
Deactivating an environment in Julia returns you to the default (global) environment.
```
(@v1.10) pkg> activate
```
#### Install packages
Installing packages in a Julia environment adds them to your project-specific manifest, keeping your project dependencies isolated.
1. Install package:
```
(@v1.10) pkg> add Package
```
2. Press backspace to exit package manager
3. Ensure the new package is installed:
```julia
julia> using Package
```
#### Using dependencies
Install dependencies from your project files, ensuring consistent version usage.
1. Enter package manager:
```
julia> ]
```
2. Install dependencies from .toml files in the project directory:
```
(@v1.10) pkg> instantiate
```
3. Update all packages:
```
(@v1.10) pkg> update
```
#### Running Julia scripts
Run a Julia script from within the project environment:
1. Activate the environment in Julia REPL
2. Run the script:
```julia
julia> include("your_script.jl")
```
Or from the command line:
```
julia --project=. your_script.jl
```
#### Managing multiple environments
Julia allows you to have multiple environments for different purposes:
1. Create a test environment:
```
(@v1.10) pkg> activate --temp
```
2. Switch between environments:
```
(@v1.10) pkg> activate /path/to/environment
```
#### Delete virtual environment
Removing the project-specific files deletes the environment and its package information.
Delete the Project.toml and Manifest.toml files from your project directory.
#### Sharing environments
To share your project environment:
1. Include both Project.toml and Manifest.toml in your version control system
2. Others can recreate your exact environment using:
```
(@v1.10) pkg> instantiate
```
## Node JS
### How to create and use a virtual environment in Node JS
Node.js doesn't have traditional virtual environments like Python or R, but it uses npm (Node Package Manager) or Yarn to manage dependencies on a per-project basis. Each project typically has its own package.json file that lists dependencies and scripts. The node_modules folder in each project acts similarly to a virtual environment, containing project-specific packages. The directions below help to initialize a Node.js project using npm.
#### Setup environment
Create a new directory and initialize it with npm. This creates a package.json file to manage your project's dependencies.
1. Check node version:
```
node -v
```
2. Create directory for project:
```
mkdir Project
```
3. Navigate to directory:
```
cd Project
```
4. Initialize the project:
```
npm init -y
```
5. Create a .nvmrc file that specifies node version:
```
node -v > .nvmrc
```
#### Install packages
Installing packages in a Node.js project adds them to your package.json file and the node_modules directory.
1. Install a package and save it to package.json:
```
npm install package --save
```
2. Install a development dependency:
```
npm install package --save-dev
```
#### Using dependencies
Use a specific Node.js version for your project and install or update all dependencies listed in your package.json file.
1. Use .nvmrc for node version:
```
nvm use
```
2. Install packages from package.json:
```
npm install
```
3. Update all packages:
```
npm update
```
#### Managing different Node.js versions
Node Version Manager (nvm) allows you to install and use different versions of Node.js for different projects.
1. Install a specific Node.js version:
```
nvm install 14.17.0
```
2. Switch to a specific Node.js version:
```
nvm use 14.17.0
```
#### Running your Node.js application
To run your Node.js application:
```
node your-app.js
```
Or, if you've defined a start script in your package.json:
```
npm start
```
#### Creating and using an .npmrc file
An .npmrc file can be used to configure npm behavior for your project:
1. Create an .npmrc file in your project root:
```
touch .npmrc
```
2. Add configuration to the file, for example:
```
save-exact=true
```
This will save exact versions of packages instead of using npm's default semantic versioning range operator.
#### Delete virtual environment
Remove the "virtual environment" by deleting the node_modules directory and optionally the package.json file. This removes all installed packages and dependency information.
1. Delete the node_modules directory:
```
rm -rf node_modules
```
2. Optionally, remove package.json and package-lock.json:
```
rm package.json package-lock.json
```
You can keep the `package.json` if you want to reinstall your dependencies later.
# C and C++ Development Guide Using Home Directory
## C
### How to use the home directory for development with C
C doesn't have a built-in package manager or virtual environment system like more modern languages. Instead, C developers typically manage their development environment and dependencies manually. For system-wide installations, package managers like apt, yum, or Homebrew are often used to install libraries and tools. For user-specific or project-specific setups, developers commonly create custom directory structures in their home directory or project folders to house libraries, header files, and binaries. Environment variables like PATH, LD_LIBRARY_PATH, and CPATH are used to tell the compiler and linker where to find these custom installations. Build tools like Make, CMake, or Autotools are used to manage the compilation process and handle dependencies. This offers fine-grained control over the development environment and suits C's low-level nature. Below are directions to set up a C environment using various directory and variable configurations.
#### Setup environment
Create directories for binaries, libraries, and include files, and set up environment variables to use these directories.
1. Verify installation:
```
gcc --version
make --version
```
2. Make directory for local installations:
```
mkdir -p ~/local/{bin,lib,include}
```
3. Add the following to your `~/.bashrc` or `~/.bash_profile`:
```
export PATH=$HOME/local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH
export CPATH=$HOME/local/include:$CPATH
export PKG_CONFIG_PATH=$HOME/local/lib/pkgconfig:$PKG_CONFIG_PATH
```
4. Reload shell configuration:
```
source ~/.bashrc # or source ~/.bash_profile
```
#### Install packages
Install packages (libraries) for C development in your home directory by downloading source code, compiling it, and installing it into your local directories.
For example, install libcurl:
```
wget https://curl.se/download/curl-7.78.0.tar.gz
tar xzf curl-7.78.0.tar.gz
cd curl-7.78.0
./configure --prefix=$HOME/local
make
make install
```
#### Create project
Create a directory structure, source files, and a Makefile for easy compilation.
1. Set up project structure:
```
mkdir my_c_project
cd my_c_project
mkdir src
```
2. Create a file that uses the library:
```
cat << EOF > src/main.c
#include <stdio.h>
#include <curl/curl.h>
int main(void) {
CURL *curl = curl_easy_init();
if(curl) {
printf("CURL initialized successfully\n");
curl_easy_cleanup(curl);
} else {
printf("Failed to initialize CURL\n");
}
return 0;
}
EOF
```
3. Create a Makefile for easy compilation (the quoted `'EOF'` stops the shell from expanding the Makefile's `$(...)` variables, and recipe lines must start with a tab):
```
cat << 'EOF' > Makefile
CC = gcc
CFLAGS = -I$(HOME)/local/include
LDFLAGS = -L$(HOME)/local/lib
LIBS = -lcurl

main: src/main.c
	$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LIBS) -o $@

.PHONY: clean
clean:
	rm -f main
EOF
```
4. Compile using:
```
make
```
5. Clean using:
```
make clean
```
#### Using the project
To run your compiled C program:
```
./main
```
#### Updating libraries
To update a library, download the new version, then compile and install it the same way as in the initial installation. For example, to update libcurl:
1. Download and extract the new version
2. Navigate to the extracted directory
3. Run the installation commands again:
```
./configure --prefix=$HOME/local
make
make install
```
#### Deleting the environment
In C, there's no virtual environment to delete. However, you can remove your local installations and project files:
1. Remove local installations:
```
rm -rf ~/local
```
2. Remove project directory:
```
rm -rf my_c_project
```
Remember to also remove or comment out the environment variable settings in your `~/.bashrc` or `~/.bash_profile` if you no longer need them.
## C++
### How to use the home directory for development with C++
Like C, C++ doesn't have a standardized built-in package manager or virtual environment system like some modern languages. Instead, C++ developers typically manage their development environment and dependencies manually or through third-party tools. For system-wide installations, package managers like apt, yum, or vcpkg are often used to install libraries and tools. For user-specific or project-specific setups, developers commonly create custom directory structures in their home directory or project folders to house libraries, header files, and binaries. Environment variables like PATH, LD_LIBRARY_PATH, and CPATH are used to tell the compiler and linker where to find these custom installations. Build systems like CMake, Make, or Ninja are widely used to manage the compilation process and handle dependencies. This offers fine-grained control over the development environment and suits the nature of C++. Below are directions to set up a C++ environment using various directory and variable configurations.
#### Setup environment
Create directories for binaries, libraries, and include files, and set up environment variables to use them.
1. Verify installation:
```
g++ --version
make --version
```
2. Make directory for local installations:
```
mkdir -p ~/local/{bin,lib,include}
```
3. Add the following to your `~/.bashrc` or `~/.bash_profile`:
```
export PATH=$HOME/local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH
export CPATH=$HOME/local/include:$CPATH
export PKG_CONFIG_PATH=$HOME/local/lib/pkgconfig:$PKG_CONFIG_PATH
```
4. Reload shell configuration:
```
source ~/.bashrc # or source ~/.bash_profile
```
#### Install packages
Install packages (libraries) for C++ development in your home directory by downloading source code, compiling it, and installing it into your local directories.
For example, install Boost:
```
wget https://boostorg.jfrog.io/artifactory/main/release/1.76.0/source/boost_1_76_0.tar.gz
tar xzf boost_1_76_0.tar.gz
cd boost_1_76_0
./bootstrap.sh --prefix=$HOME/local
./b2 install
```
#### Create project
Create a C++ project by creating a directory structure, source files, and a Makefile for easy compilation.
1. Set up project structure:
```
mkdir my_cpp_project
cd my_cpp_project
mkdir src
```
2. Create a C++ file that uses the library:
```
cat << EOF > src/main.cpp
#include <iostream>
#include <boost/version.hpp>
#include <boost/algorithm/string.hpp>
int main() {
std::cout << "Boost version: "
<< BOOST_VERSION / 100000 << "."
<< BOOST_VERSION / 100 % 1000 << "."
<< BOOST_VERSION % 100 << std::endl;
std::string str = "Hello, World!";
boost::to_upper(str);
std::cout << "Uppercase: " << str << std::endl;
return 0;
}
EOF
```
3. Create a Makefile for easy compilation (again, the quoted `'EOF'` prevents shell expansion of the `$(...)` variables, and recipe lines must start with a tab):
```
cat << 'EOF' > Makefile
CXX = g++
CXXFLAGS = -I$(HOME)/local/include
LDFLAGS = -L$(HOME)/local/lib
LIBS = -lboost_system

main: src/main.cpp
	$(CXX) $(CXXFLAGS) $(LDFLAGS) $^ $(LIBS) -o $@

.PHONY: clean
clean:
	rm -f main
EOF
```
```
4. Compile using:
```
make
```
5. Clean using:
```
make clean
```
#### Using the project
To run your compiled C++ program:
```
./main
```
#### Updating libraries
To update a library, download the new version, then compile and install it the same way as in the initial installation. For example, to update Boost:
1. Download and extract the new version
2. Navigate to the extracted directory
3. Run the installation commands again:
```
./bootstrap.sh --prefix=$HOME/local
./b2 install
```
#### Deleting the environment
In C++, there's no virtual environment to delete. However, you can remove your local installations and project files:
1. Remove local installations:
```
rm -rf ~/local
```
2. Remove project directory:
```
rm -rf my_cpp_project
```
Remember to also remove or comment out the environment variable settings in your `~/.bashrc` or `~/.bash_profile` if you no longer need them.
## Rust
### How to simulate a virtual environment with rust
Rust uses Cargo, its built-in package manager and build system, to manage dependencies and create isolated project environments. Unlike some languages that require separate tools for virtual environments, Rust's approach integrates this functionality directly into its core toolchain. Each Rust project, initialized with `cargo new`, creates a self-contained environment with its own Cargo.toml file for declaring dependencies and build configurations. Cargo handles downloading, compiling, and linking of dependencies, ensuring that each project has its own isolated set of packages. Below are the directions for creating a development environment in Rust.
#### Setup Environment
Create a new directory and initialize it with Cargo. This creates a new Rust project with its own Cargo.toml file for managing dependencies.
1. Ensure Rust and Cargo are installed:
```bash
rustc -V
cargo -V
```
2. Create directory for project:
```bash
mkdir Project
```
3. Navigate to directory:
```bash
cd Project
```
4. Initialize a Cargo project:
```bash
cargo new my_project
cd my_project
```
#### Install Packages
Installing packages in a Rust project adds them to your Cargo.toml file and downloads them to a local cache, keeping your project dependencies isolated and easily reproducible.
Install a package:
```bash
cargo add package_name
```
#### Using Dependencies
Building your Rust project with Cargo automatically downloads and compiles all necessary dependencies specified in your Cargo.toml file.
Install packages using Cargo.toml:
```bash
cargo build
```
#### Delete virtual environment
In Rust, there isn't a traditional "virtual environment" to delete; each Cargo project is already self-contained, with its own Cargo.toml and build artifacts. To "delete" this environment:
1. Remove the entire project directory:
```bash
cd ..
rm -rf my_project
```
This will remove the project files, Cargo.toml, and all compiled artifacts.
Note: If you want to keep the source code but remove all compiled artifacts and dependencies, you can instead run:
```bash
cargo clean
```
This will remove the `target` directory, which contains all compiled files and build artifacts (downloaded dependency sources remain cached under `~/.cargo`).
......@@ -48,6 +48,20 @@ Rsync is a particularly useful tool and is recommended for transferring files to
When transferring very large files or datasets, it is advised to use rsync and to calculate and confirm checksums to ensure data integrity.
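As an illustration, a large transfer with integrity verification might look like this (a sketch; the host name and paths are placeholders):

```
# Transfer a dataset to the cluster, with progress and resume support
rsync -avP dataset/ username@login.starhpc.example:/fs1/proj/myproject/dataset/

# Compute checksums locally, copy them over, and verify on the remote side
sha256sum dataset/* > dataset.sha256
scp dataset.sha256 username@login.starhpc.example:/fs1/proj/myproject/
ssh username@login.starhpc.example 'cd /fs1/proj/myproject && sha256sum -c dataset.sha256'
```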
## Cyberduck
Cyberduck is a file transfer application with an intuitive graphical interface for transferring files to or from a remote machine. Cyberduck is available for both Windows and Mac. Download it from [cyberduck.io](https://cyberduck.io/).
Click "Open Connection" and a new window will be displayed like below. Select "SFTP (SSH File Transfer Protocol)" from the top dropdown menu. Enter the server, port number, your username, and Linux Lab password. Then click "Connect".
![3-connection.png]({{ site.baseurl }}/images/cyberduck_setup_images/3-connection.png "3-connection.png")
If you see a window asking about an "Unknown fingerprint", click "Always" and then "Allow".
![4-fingerprint.png]({{ site.baseurl }}/images/cyberduck_setup_images/4-fingerprint.png "4-fingerprint.png")
You should now be able to see your user's home directory on the cluster. You can transfer files to and from it by dragging and dropping files between this window and your "Finder" windows.
## Network Interfaces and Bandwidth
All file transfer access to the Star HPC Cluster is currently through the login node's 1GbE interface. Users should be aware of potential bandwidth limitations, especially when transferring large amounts of data.
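For a rough sense of scale, 1 Gb/s is about 125 MB/s, so a 100 GB dataset takes on the order of 15 minutes to transfer under ideal conditions, and usually longer in practice.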
......@@ -55,3 +69,4 @@ All file transfer access to the Star HPC Cluster is currently through the login
## User Authentication and Permissions
File transfers are authenticated in the same way as SSH access. SSH keys are the preferred method for secure authentication, although password authentication is currently allowed. Plans for implementing Multi-Factor Authentication (MFA) are being considered for future security enhancements.
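For reference, setting up key-based authentication typically looks like this (the login host name below is a placeholder):

```
# Generate a key pair (choose a passphrase when prompted)
ssh-keygen -t ed25519

# Install the public key on your cluster account
ssh-copy-id username@login.starhpc.example
```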