11+ TREX Admin Jobs in Mumbai | TREX Admin Job openings in Mumbai
Apply to 11+ TREX Admin Jobs in Mumbai on CutShort.io. Explore the latest TREX Admin Job opportunities across top companies like Google, Amazon & Adobe.

- Administers overall setup, configuration and maintenance of the Salesforce.com platform for the various divisions
- Performs system administration functions such as user management (profiles and roles), field and validation rule configuration, record types, picklists, page layout management, mobile setup, data management, email templates, folder management, and public groups, as well as other configuration items
- Configures validation rules, reports, workflows, and Process Builder automation as per business requirements
- Responsible for analyzing, troubleshooting, and resolving issues reported by the delivery/production (tier 2 or tier 3) teams
- Partner with Product Development to resolve customer reported bugs.
- Ability to reliably and accurately estimate the time needed to resolve technical problems, with a clear understanding of your limitations for escalation purposes
- Meet individual performance goals, including weekly and/or annual efficiency goals.
- Should have good communication skills
- Experience level: 5-6 years in a similar profile
- Team player willing to collaborate throughout all phases of development, testing and deployment
- Ability to solve problems and meet deadlines with minimal supervision
- Excellent analytical skills
- We take transparency very seriously. Along with a full view of team goals, get a top-level view across the board with our monthly & quarterly town hall meetings.
- A highly inclusive work culture that promotes a relaxed, creative and productive environment.
- Enjoy autonomy, open communication, and growth opportunities while maintaining a healthy work-life balance
- Go on company-sponsored offsites once a year and blow off steam with your work buddies! (post-pandemic)
Perks & Benefits:
- Learning is a way of life. Unlock your full potential backed by cutting-edge tools and mentorship (a MacBook for Engagers & reimbursement for your WFH setup!)
- Get best-in-class medical insurance (with Covid care facilities), mental health care programs, and a contemporary leave policy (beyond sick leaves)
Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters on AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
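To illustrate the CI/CD duties above, a deployment pipeline often ends with an automated health gate before a release is promoted. The sketch below is a minimal, hedged example; the `should_promote` function, the error-rate threshold, and the sample values are all illustrative assumptions, not part of this role's actual stack.

```python
# Minimal sketch of a post-deploy health gate: promote a canary release
# only if every sampled error rate stays below a threshold.
# The threshold and sample values are hypothetical.

def should_promote(error_rates, threshold=0.01):
    """Return True when all sampled error rates are under the threshold."""
    return all(rate < threshold for rate in error_rates)

if __name__ == "__main__":
    canary_samples = [0.002, 0.004, 0.003]  # fraction of failed requests
    action = "promote" if should_promote(canary_samples) else "rollback"
    print(action)  # → promote
```

In a real pipeline, a step like this would sit between the canary deploy and the full rollout, with the metrics pulled from Datadog or ELK rather than hardcoded.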
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers
At Egnyte we build and maintain our flagship software: a secure content platform used by companies like Red Bull and Yamaha.
We store, analyze, organize, and secure billions of files and petabytes of data with millions of users. We observe more than 1M API requests per minute on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our Engineers are part of the process from design to code, to test, to deployment, and back again for further iterations.
We have 300+ engineers spread across the US, Poland, and India.
You will be part of our DevOps Team working closely with our DBA team in automating, monitoring, and scaling our massive MySQL cluster. Previous MySQL experience is a plus.
Your day-to-day at Egnyte
- Designing, building, and maintaining cloud environments (using Terraform, Puppet or Kubernetes)
- Migrating services to cloud-based environments
- Collaborating with software developers and DBAs to create a reliable and scalable infrastructure for our product.
About you
- 2+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes
- Programming prowess (Python, Java, Ruby, Golang, or JavaScript)
- Experience with databases (MySQL, Postgres, RDS/Aurora, or others)
- Experience with public cloud services (GCP/AWS/Azure)
- Good understanding of the Linux Operating System on the administration level
- Preferably you have experience with HA solutions: our tools of choice include Orchestrator, ProxySQL, HAProxy, Corosync & Pacemaker, etc.
- Experience with metric-based monitoring solutions (Cloud: CloudWatch/Stackdriver, On-prem: InfluxDB/OpenTSDB/Prometheus)
- Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude)
You will be part of the Site Reliability Engineering Team. This role involves the design, scale, performance tuning, monitoring, and administration activities on the various databases (majority on MySQL).
Your day-to-day at Egnyte:
- Build, scale, and administer a large fleet of MySQL servers spread over multiple data centers with a focus on performance, scale, and high availability.
- Monitor and troubleshoot critical performance bottlenecks for MySQL databases before they cause downtime.
- Review and assess the impact of database schema design and topology changes prior to their implementation
- Ensure that databases are secured, maintained, backed up, and highly available.
- Review stress testing results and provide recommendations to development teams
- Automate anomaly detection to surface databases with failures, IOPS, deadlocks, and other failure reasons.
- Automate management tasks, streamline processes, and perform standard administrative functions
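The anomaly-detection duty above can start as something as simple as a z-score over fleet-wide counters. A hedged sketch, assuming a dict of per-host deadlock counts (the metric source and the threshold are illustrative, not from this posting):

```python
# Flag hosts whose deadlock count deviates sharply from the fleet mean.
# Metric source and z-score threshold are illustrative assumptions.
import statistics

def anomalous_hosts(deadlocks_by_host, z_threshold=3.0):
    """Return hosts sitting more than z_threshold stdevs above the fleet mean."""
    values = list(deadlocks_by_host.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # identical counts: nothing stands out
        return []
    return [host for host, count in deadlocks_by_host.items()
            if (count - mean) / stdev > z_threshold]

fleet = {"db1": 5, "db2": 6, "db3": 4, "db4": 5, "db5": 200}
print(anomalous_hosts(fleet, z_threshold=1.5))  # → ['db5']
```

In production, the counts would come from a metrics store (e.g. Prometheus or InfluxDB, as mentioned elsewhere on this page) and the threshold would be tuned per metric.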
About you:
- Understanding of MySQL’s (5.7+) underlying storage engines
- Knowledge of Performance and scalability issues with MySQL
- Strong experience with MySQL HA using Orchestrator/ProxySQL/Consul/Pacemaker
- Experience with configuration management like Puppet/Ansible
- Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
- Automation experience with ‘Python/Ruby/Perl’ and ‘SQL’ scripting
- Analytical skills necessary to troubleshoot errors and performance issues across a large array of MySQL clusters spread over multiple data centers.
Due to the nature of the role, it would be nice if you have also:
- Experience in other distributed systems like Redis, Elasticsearch, Memcached.
- Experience in managing a large fleet of database servers.
- Knowledge of HA and scalability issues with PostgreSQL
Responsibilities
- Implement various development, testing, automation tools, and IT infrastructure
- Design, build, and automate the AWS infrastructure (VPC, EC2, Networking, EMR, RDS, S3, ALB, CloudFront, etc.) using Terraform
- Manage end-to-end production workloads hosted on Docker and AWS
- Automate CI pipeline using Groovy DSL
- Deploy and configure Kubernetes clusters (EKS)
- Design and build a CI/CD Pipeline to deploy applications using Jenkins and Docker
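Before the Terraform step above, the VPC layout itself has to be planned. A minimal sketch, assuming a /16 VPC CIDR split evenly (e.g. one subnet per availability zone); the CIDR and subnet count are made up for illustration, and the real work of encoding the result in Terraform is not shown:

```python
# Carve a VPC CIDR into equally sized subnets, e.g. one per AZ.
# The CIDR and count below are illustrative assumptions.
import ipaddress

def plan_subnets(vpc_cidr, count):
    """Split vpc_cidr into equal subnets, returning the first `count` of them."""
    vpc = ipaddress.ip_network(vpc_cidr)
    extra_bits = (count - 1).bit_length()  # prefix bits needed for >= count subnets
    return [str(net) for net in vpc.subnets(prefixlen_diff=extra_bits)][:count]

print(plan_subnets("10.0.0.0/16", 3))
# → ['10.0.0.0/18', '10.0.64.0/18', '10.0.128.0/18']
```

A plan like this is typically then fed into Terraform `aws_subnet` resources, one per CIDR.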
Eligibility
- At least 8 years of proven experience in AWS-based DevOps/cloud engineering and implementations
- Expertise in all common AWS Cloud services like EC2, EKS, S3, VPC, Lambda, API Gateway, ALB, Redis, etc.
- Experience in deploying and managing production environments in Amazon AWS
- Strong experience in continuous integration and continuous deployment
- Knowledge of application build, deployment, and configuration using Jenkins
3-5 years of experience in DevOps, systems administration, or software engineering roles.
B. Tech. in computer science or related field from top tier engineering colleges.
Strong technical skills in software development, systems administration, and cloud infrastructure management.
Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Experience with containerization technologies such as Docker and Kubernetes.
Experience with cloud providers such as AWS or Azure.
Experience with scripting languages such as Bash or Python.
Strong problem-solving and analytical skills.
Strong communication and collaboration skills
Navi Mumbai | 1-3 years
Rejolut is among the fastest-growing and award-winning tech companies working on leading technologies, namely Blockchain, Machine Learning & Artificial Intelligence, complex mobile & web apps, IoT, etc. Rejolut is a venture-backed company with clients in several countries, namely Malaysia Airlines, gba global, my-earth, biomes, Dlg-hub, etc.
We are looking for tech geeks who have hands-on experience and are in love with building scalable, distributed, and large web/mobile products and tech solutions. He/She must be an excellent problem solver with a passion to self-learn and implement web technologies (frontend + backend). He/She would be responsible for the architecture design, code review, and technology build and deployment activities of the product.
Key Skills For DevOps Engineer:
Background in Linux/Unix systems
Experience with DevOps techniques and philosophies
Experience with automation of code builds and deployments
Knowledge/experience of CI/CD pipelines to a wide variety of virtual environments in private and public cloud providers (Jenkins/Hudson, Docker, Kubernetes, Azure, AWS)
Knowledge/experience of software configuration management systems and source code version control systems (Jenkins, Bitbucket, Consul, Vagrant, Chef, Puppet, Gerrit)
Passion to work in an exciting, fast-paced environment
Self-starter who can implement with minimal guidance
High Availability: Load Balancing (ELB), Reverse Proxies, CDNs etc.
AWS core components (or their GCP/Azure equivalents) and their management: EC2, ELB,
NAT, VPC, IAM Roles and policies, EBS and S3, CloudFormation, Elasticache, Route53, etc.
Experience with K8s and Docker is a plus
Strong interpersonal and communication skills
Job Responsibilities:
Gather and analyse cloud infrastructure requirements
Automate infrastructure provisioning and repetitive operational tasks
Support existing infrastructure, analyse problem areas and come up with solutions
Optimise stack performance and costs
An eye for monitoring: the ideal candidate should be able to look at complex infrastructure and figure out what to monitor and how.
Rejolut - As a Career Differentiator
- We are a young and dynamic team obsessed with solving futuristic and evolutionary business problems at scale with next-generation technologies like blockchain, crypto, and machine learning. We focus on empowering people across the globe to be technically efficient, making advancements in technology, and providing new capabilities that were previously thought impossible.
- We provide exposure to higher learning opportunities so that you can work on complex and cutting-edge technologies like React, React Native, Flutter, NodeJS, Python, Go, Svelte, and WebAssembly. We have strong expertise in blockchain and crypto technology and work with networks like Hedera Hashgraph, Tezos, BlockApps, Algorand, and Cardano.
- The company is backed by two technology co-founders who are well-versed in consumer applications; their work has been downloaded millions of times, and they have held leadership positions at companies like Samsung, Purplle, and Loylty Rewardz.
Benefits :
> Health Insurance
> Fast growth and more visibility into the company
> Experience to work on the latest technology
> Competitive Learning Environment with supportive co-workers
> Employee friendly HR Policies
> Paid leaves up to certain limits
> Competitive salaries & Bonuses
> Liberal working atmosphere
> Get mentored by the best in the industry
Schedule:
Day Shift/Flexible working hours
Monday to Friday


Summary
We are building the fastest, most reliable & intelligent trading platform. That requires highly available, scalable & performant systems. And you will be playing one of the most crucial roles in making this happen.
You will be leading our efforts in designing, automating, deploying, scaling and monitoring all our core products.
Tech Facts so Far
1. 8+ services deployed on 50+ servers
2. 35K+ concurrent users on average
3. 1M+ algorithms run every min
4. 100M+ messages/min
We are a 4-member backend team with one DevOps Engineer. Yes, all of this is done by this incredibly lean team.
Big Challenges for You
1. Manage 25+ services on 200+ servers
2. Achieve 99.999% (5 Nines) availability
3. Make 1-minute automated deployments possible
If you like to work on extreme scale, complexity & availability, then you will love it here.
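For context on what the 99.999% target above actually means: five nines leaves a downtime budget of roughly five minutes per year. The arithmetic, assuming a 365-day year:

```python
# Downtime budget implied by an availability target (365-day year assumed).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability):
    """Minutes per year a service may be down at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_budget_minutes(0.99999), 2))  # → 5.26 (minutes per year)
```

By comparison, three nines (99.9%) allows about 525 minutes, i.e. nearly nine hours, per year, which is why each extra nine demands a step change in automation and redundancy.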
Who are we
We are on a mission to help retail traders prosper in the stock market. In just 3 years, we have built the 3rd most popular app for the stock markets in India. And we are aiming to be the de-facto trading app in the next 2 years.
We are a young, lean team of ordinary people that is building exceptional products, that solve real problems. We love to innovate, thrill customers and work with brilliant & humble humans.
Key Objectives for You
• Spearhead system & network architecture
• CI, CD & Automated Deployments
• Achieve 99.999% availability
• Ensure in-depth & real-time monitoring, alerting & analytics
• Enable faster root cause analysis with improved visibility
• Ensure a high level of security
Possible Growth Paths for You
• Be our Lead DevOps Engineer
• Be a Performance & Security Expert
Perks
• Challenges that will push you beyond your limits
• A democratic place where everyone is heard & aware
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Lower Parel, Mumbai, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes, and mitigating risks seamlessly, in real time and at a fraction of the current cost.
A few recognitions:
- Recognized among LinkedIn's Top 25 startups in India to work with, 2019
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment to minimize downtime and maximize turnaround time
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing the IAAS and PAAS components on popular public Cloud Service Providers like AWS, Azure, GCP etc.
- Proficiency in Unix operating systems and comfort with networking concepts
- Experience with developing/deploying a scalable system
- Experience with the Distributed Database & Message Queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.)
- Experience in managing Hadoop clusters
- Understanding of containers and experience managing them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology