VMware Horizon/View, VMware vSphere, NSX • Azure Cloud VDI/AVD (Azure Virtual Desktop) • VMware virtualization • Hyperconverged infrastructure, vSAN, storage devices (SAN/NAS), etc.
Roles and Responsibilities
- Designing, implementing, testing and deployment of the virtual desktop infrastructure
- Facilitating the transition of the VDI solution to Operations, providing operational and end user support when required.
- Acting as the single point of contact for all technical engagements on the VDI infrastructure.
- Creation of documented standard processes and procedures for all aspects of VDI infrastructure, administration and management.
- Working with the vendor in assessing the VDI infrastructure architecture from a deployment, performance, security and compliance perspective.
- Knowledge of security best practices and understanding of vulnerability assessments.
- Providing mentoring and guidance to other VDI Administrators.
- Documenting designs, development plans and operations procedures for the VDI solution.
- You would build, monitor, maintain and enhance a cutting-edge AI platform and system infrastructure
- You would help run the infrastructure that can serve 1000s of customers and millions of data requests per hour.
- You would manage data configuration, setup, tracking, reporting etc.
- You would troubleshoot infrastructure problems quickly and efficiently
- You would manage our AWS deployment, build CI/CD pipeline and enhance network security and scale
- While there would be a Lead Engineer to guide you, you should be able to manage the system and improve the processes largely by yourself
- You would also work closely with the Product Owner to build engineering capabilities that align with the evolution of the product.
What Really Matters
- Very strong knowledge in systems, networks and cloud space
- DevOps experience with AWS, GCP or Azure; Docker; CI/CD tools; scripting languages like Python
- Practical knowledge and experience in deploying and managing big data solutions on a cloud platform like AWS or Google Cloud
- Should have played a role in running at least one backend web app/product
- Your values, ethics and character
- Love for running large-scale systems, complex deployments, programming and computers in general
What Matters A Little
- Good knowledge of state-of-the-art web service architectures and associated technologies
- Solid familiarity with Elasticsearch or NoSQL technologies like MongoDB/HBase/Cassandra/Redis
- A strong recommendation by a technology person that we (would) have a lot of respect for
- Your knowledge and interest in big data and Machine Learning space
What Does Not Matter
- The school that you come from
- The organizations where you have worked earlier
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes that empower developers to deploy and release their code seamlessly.
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding accessibility and security compliance (depending on the specific project)
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Understanding fundamental design principles behind a scalable application
- Configuration management tools (Ansible/Chef/Puppet); Cloud Service Providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem is a plus
- Should be able to make key decisions for our infrastructure, networking and security
- Writing and adapting shell scripts during migrations and database connections
- Monitoring production server health across parameters (CPU load, physical memory, swap memory) and setting up monitoring tools such as Nagios
- Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently
- Setting up and managing VPCs and subnets; connecting different zones; blocking suspicious IPs/subnets via ACLs
- Creating/managing AMIs, snapshots and volumes; upgrading/downgrading AWS resources (CPU, memory, EBS)
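The health monitoring described above (CPU load, physical memory, swap) can be sketched with standard-library Python; the warning thresholds here are illustrative assumptions, and the `/proc/meminfo` parsing assumes a Linux host:

```python
import os

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            info[key.strip()] = int(rest.split()[0])  # value in kB
    return info

def health_report(meminfo, load1, cpu_count, mem_warn=0.90, swap_warn=0.50):
    """Return a list of warning strings for the given metrics."""
    warnings = []
    mem_used = 1 - meminfo["MemAvailable"] / meminfo["MemTotal"]
    if mem_used > mem_warn:
        warnings.append(f"memory {mem_used:.0%} used")
    if meminfo.get("SwapTotal", 0):
        swap_used = 1 - meminfo["SwapFree"] / meminfo["SwapTotal"]
        if swap_used > swap_warn:
            warnings.append(f"swap {swap_used:.0%} used")
    if load1 > cpu_count:  # 1-minute load above core count
        warnings.append(f"load {load1:.2f} > {cpu_count} cores")
    return warnings

if __name__ == "__main__" and os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        mi = parse_meminfo(f.read())
    print(health_report(mi, os.getloadavg()[0], os.cpu_count()))
```

In practice a cron job would run such a check periodically and hand any warnings to an alerting channel, which is the role tools like Nagios formalize.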
The candidate will be responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams.
- Strong knowledge of configuration management tools such as Ansible, Chef and Puppet
- Extensive experience with change-tracking tools like JIRA; log analysis and maintenance of production server error-log documentation
- Experienced in troubleshooting, backup and recovery
- Excellent knowledge of cloud service providers such as AWS and DigitalOcean
- Good knowledge of the Docker and Kubernetes ecosystem
- Proficient understanding of code versioning tools such as Git
- Must have experience working in an automated environment
- Good knowledge of Amazon Web Services such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC and Amazon CloudWatch
- Scheduling jobs using crontab; creating swap memory
- Proficient knowledge of access management (IAM)
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
The candidate should have good knowledge of GCP.
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in a relevant field
Experience: 2-6 years
TIFIN is a fintech company backed by industry leaders including JP Morgan, Morningstar, Broadridge and Hamilton Lane.
We build engaging experiences through powerful AI and personalization. We leverage the combined power of investment intelligence, data science, and technology to make investing a more engaging experience and a more powerful driver of financial wellbeing.
At TIFIN, design and behavioral thinking enables engaging customer centered experiences along with software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
We hope to change the world of wealth in the ways that personalized delivery has changed the world of movies, music and more. In a world where every individual is unique, we believe the power of AI-based personalization to match individuals to financial advice and investments is necessary to drive wealth goals.
- Shared Understanding through Listening and Speaking the Truth. We communicate with radical candor, precision and compassion to create a shared understanding. We challenge, but once a decision is made, commit fully. We listen attentively, speak candidly.
- Teamwork for Teamwin. We believe in winning together and learning together. We fly in formation. We cover each other’s backs. We inspire each other with our energy and attitude.
- Make Magic for our Users. We center around the voice of the customer. With deep empathy for our clients, we create technology that transforms investor experiences.
- Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. We strive to be the best we can possibly be. No excuses.
- Innovate with Creative Solutions. We believe that disruptive innovation begins with curiosity and creativity. We challenge the status quo and problem solve to find new answers.
WHAT YOU'LL BE DOING:
As part of TIFIN’s technology division, you will lead the DevOps function for a software product, demonstrating leadership abilities.
- Lead the end-to-end technology/infrastructure environments
- Troubleshoot any issues that arise from deployments and other automations
- Set up and manage security configurations
- Implement systems/tools/processes to monitor the performance and security integrity of the technology stack
- Implement CI/CD from our source control platform (e.g. GitLab)
- Develop automation tools and dashboards to manage and monitor the infrastructure
- Provide technical guidance during software development
- Stay current with industry trends and source new ways for our business to improve
- Set up / decommission technology assets and maintain asset and configuration records
- Maintain inventory of the relevant environments
WHO ARE YOU:
- 5+ years of experience with substantial experience in a DevOps Engineering/Lead role
- Strong experience designing and implementing highly available, scalable solutions
- Expertise in planning/implementing BCP / DR policies in line with company objectives
- Strong experience with Linux servers and their administration/troubleshooting
- Strong understanding of Networking concepts and best practices
- Working experience with Docker and Kubernetes
- Hands-on experience with AWS & GCP services like VPC, EC2, S3, ELB, RDS, ECS/EKS, IAM, CloudFront, CloudWatch, SQS/SNS, App Engine, etc.
- Strong experience with databases such as PostgreSQL and Redis
- Knowledge of scripting languages such as Python and Bash
- Expertise in Git (GitHub/GitLab)
- Experience working with Data Lakes and ETL pipelines
- Experience with project workflow tools such as Jira in an Agile-Scrum environment
- Experience with open-source technologies and cloud services
- Strong communication & interpersonal skills and ability to explain protocol and processes to the team
- Strong troubleshooting skills with the ability to spot issues before they become problems
COMPENSATION AND BENEFITS PACKAGE:
Competitive and commensurate to experience + discretionary annual bonus + ESOPs
About the TIFIN Group: The TIFIN Group combines expertise in finance, technology, entrepreneurship and investing to start and help build a portfolio of brands and companies in the areas of investments, wealth management and asset management.
TIFIN companies are centered around the user and emphasize design innovation to build operating systems. We focus on simplifying and democratizing financial science to make it more holistic and integral to users’ lives.
As a Cloud Infrastructure Consultant, you will be responsible for defining cloud platforms, cloud designs and environments at a detailed level. You will design and develop the infrastructure technical architecture for different customers and also work closely with Microsoft.
- Plan, architect, configure and deploy large scale cloud infrastructure in Azure for different services.
- Work as part of engineering teams and on pre-sales, project and account-based assignments to design technical solutions per project requirements
- Produce end-to-end solution designs, putting together technologies from multiple IT systems and departments across either the application or infrastructure domains
- Must have detailed knowledge and experience of one or more application or infrastructure domains and the ability to clearly document and communicate the domain architecture
- Ensure technical quality and assurance by participating in Architecture Governance and Technical Design peer review processes, working closely with customer and internal stakeholders as appropriate
- Oversee coordination with the Solution Manager/Solution Architect, and team up with other Functional or Technical Architects.
- Experience and working knowledge with the Microsoft Azure Infrastructure-as-a-Service platform, including planning, configuration, optimization and deployment.
- Strong programming experience in Python, shell scripting & Linux programming
- Experience in AKS and Docker DevOps pipelines (mandatory)
- Experience in managing AWS Directory Service, domains, certificates, compute, networking & storage services
- Strong experience in managing MongoDB and DynamoDB clusters, etc.
- Experience managing cloud/data center operations, including governance, monitoring, alerting and notifications.
- Monitoring and logging experience with Prometheus/Grafana and the ELK stack.
- Experience in the design and implementation of CI/CD pipelines for Node.js-, Angular- and Android-based projects, etc.
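A CI/CD pipeline of the kind described above can be sketched as a minimal `.gitlab-ci.yml` for a Node.js project. This is a hedged illustration, not a prescribed setup: the stage names, image tags, manifest path `k8s/`, and the assumption of GitLab as the CI platform and pre-configured cluster credentials are all illustrative.

```yaml
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: node:18
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl apply -f k8s/   # assumes manifests in k8s/ and cluster credentials already configured
  only:
    - main
```

Restricting `deploy` to the `main` branch keeps feature branches running only the test and build stages.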
Good to have:
- Passion for writing great, simple, clean, efficient code
- Should be a very fast learner and have excellent problem-solving capabilities
- Should have excellent written and verbal communication skills
- Experience working in large-scale distributed systems is a plus
- Should be able to independently design and build components for the automation platform
- Should assist in maintaining the tools and troubleshooting issues
Experience: 3+ years
Location: Noida, Sector 62 (currently WFH)
Working Days: 5 Days
- Experience with DevOps environments and with supporting software development, deployment and maintenance.
- Strong knowledge of Linux and Windows systems (Linux and DevOps certifications will be an added advantage).
- Experience implementing container technologies like Docker and Kubernetes is mandatory.
- Strong knowledge of managing database servers such as PostgreSQL, MySQL and MongoDB.
- Experience in the installation and configuration of continuous monitoring tools like Nagios, Zabbix, etc.
- Strong knowledge of monitoring and interpreting various kinds of logs.
- Excellent knowledge of AWS, Azure and GCP cloud infrastructure and of implementing best practices.
- Hands-on experience installing and configuring web servers, load balancers and SSL certificates.
- Working experience of DevOps in an AWS Kubernetes environment.
- Working experience with auto scaling and performance optimization of AWS services (EC2, AWS RDS, ECS, EKS, IAM roles, etc.).
- Experience with cloud load balancing and failover concepts.
- Expert knowledge of cloud security trends and best practices.
- Experience writing development scripts in AWS environments using Python, Bash, etc.
- Hands-on experience writing YAML and JSON files for Docker environments.
- Knowledge of deployment tools like Ansible.
- Experience with CI and configuration management platforms (e.g. Jenkins, Chef, Puppet, etc.).
- Understanding of source code version control tools like Git and Bitbucket.
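The YAML files mentioned above for a Docker environment often take the form of a Compose file. This minimal sketch wires an application container to a PostgreSQL database; the service names, image tags, ports and credentials are illustrative assumptions only:

```yaml
version: "3.8"
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/appdb
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume persists data across restarts
volumes:
  db-data:
```

The `app` service reaches the database by its service name (`db`) on the Compose-managed network, which is the pattern that later carries over to Kubernetes Services.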
• Involved in entire SDLC lifecycle including analysis, development, fixing and monitoring of issues on the assigned product lines.
• Meets and exceeds standards for the quality and timeliness of the work products.
• Implements, unit tests, debugs and leads integrations of complex code.
• Identify opportunities for further enhancements and refinements to best practices, standards and processes.
• Ensure a robust, securely accessible, highly available and highly scalable product that meets or exceeds customer and end-user expectations.
Experience 3 – 6 Years
Technical Duties & Responsibilities
We are looking for Independent Contributors with 2-4 years of experience in scalable architecture development, who have a good understanding of microservices-based architecture and a comprehensive awareness of various architectures and their suitability for product requirements:
• Strong knowledge of microservices based architecture model.
• Good knowledge of messaging frameworks like RabbitMQ. We prefer a candidate who has used messaging for a chat-type solution or developed high-transaction messaging queues.
• Should have experience with Elasticsearch and Kibana.
• Good understanding of RESTful architecture and database technologies (both SQL and NoSQL).
• Ability to understand and solve performance issues and constraints.
• Should have understanding of scaling applications, and know the bottlenecks involved in resource optimization.
• Proficient in Microsoft technologies such as ASP.NET, C#, MVC, web services (development and consumption), and API development
• Should have experience in Agile, iterative and Scrum based projects.
• Should have worked in a Test-Driven Development (TDD) environment.
• Good knowledge of object-oriented programming, design principles like SOLID, and Gang of Four design patterns.
About the Role
Dremio’s SREs ensure that our internal and externally visible services have reliability and uptime appropriate to users' needs and a fast rate of improvement. You will be joining a newly formed team that will spearhead our efforts to launch a cloud service. This is an opportunity to join a fast-growing startup and help build a cloud service from the ground up.
Responsibilities and Ownership
- Ability to debug and optimize code and automate routine tasks.
- Evangelize and advocate for reliability practices across our organization.
- Collaborate with other Engineering teams to support services before they go live through activities such as system design consulting, developing software platforms and frameworks, monitoring/alerting, capacity planning and launch reviews.
- Analyze and optimize our core product by developing and implementing reliability and performance practices.
- Scale systems sustainably through automation and evolve systems by pushing for changes that improve reliability and velocity.
- Be on-call for services that the SRE team owns.
- Practice sustainable incident response and blameless postmortems.
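One routine automation pattern behind the reliability work above is retrying transient failures with jittered exponential backoff. This is a sketch of the idea, not a prescribed implementation; the delay values and the set of retryable exceptions are illustrative assumptions, and `fetch_health_endpoint` in the usage comment is hypothetical:

```python
import random
import time

def retry(func, attempts=5, base_delay=0.5, max_delay=30.0,
          retry_on=(ConnectionError, TimeoutError)):
    """Call func(), retrying transient failures with jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))  # full jitter spreads out retry storms

# usage sketch (hypothetical callable):
# retry(lambda: fetch_health_endpoint(), attempts=3)
```

The full-jitter sleep (uniform between 0 and the capped exponential delay) is what keeps many clients retrying a shared dependency from synchronizing into a thundering herd.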
- 6+ years of relevant experience in the following areas: SRE, DevOps, Cloud Operations, Systems Engineering, or Software Engineering.
- Excellent command of cloud services on AWS/GCP/Azure, Kubernetes and CI/CD pipelines.
- Have moderate to advanced experience in Java, C, C++, Python, Go or other object-oriented programming languages.
- You are interested in designing, analyzing and troubleshooting large-scale distributed systems.
- You have a systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- You have a great ability to debug and optimize code and automate routine tasks.
- You have a solid background in software development and architecting resilient and reliable applications.
- You will manage all elements of the post-sale program relationship with your customers, starting with customer on-boarding and continuing throughout the customer relationship.
- As the primary customer interface, you engage with customer teams to educate, identify needs, develop designs, set goals, manage and execute on plans that unlock continuous, incremental value from their investments in the CloudPassage Halo platform.
- You are hands-on during execution and thoroughly enjoy seeing your security projects come to life and supporting them afterwards. You are a trusted adviser.
- Manage a portfolio of 5+ Enterprise customer accounts with complex needs (typical enterprise customers invest between $500k and $4m+ per year with CloudPassage, have hundreds to tens of thousands of individual public cloud infrastructure deployments, and protect hundreds of thousands of cloud infrastructure assets with Halo).
- Provide level-3 technical support on your customer's most complex issues
- Lead implementation of low-level security controls in Cloud environments, for services, server and containers
- Remotely diagnose & resolve DevSecOps issues in customer environments - able to resolve their DevOps issues that may be interfering with CloudPassage processing.
- Interact with CloudPassage Engineering team by providing customer issue reproduction and data capture, technical diagnostics and validating fixes. QA experience preferred.
- Establish and program manage proactive, value-driven, high-touch relationships with your customers to understand, document and align customer strategies, business objectives, designs, processes and projects with Halo platform capabilities and broader CloudPassage services.
- Develop a trusted advisor relationship by building and maintaining appropriate relationships at all levels with your customer accounts, creating a premium and high-caliber experience.
- Ensure continued satisfaction, identify & confirm unaddressed customer needs that can be value-add opportunities for up-sell and cross-sell, and communicate those needs to the CloudPassage sales team. Identify any early CSAT issues and renewal risks and work with the internal team to remediate and ensure strong CSAT and a successful renewal.
- Be a strong customer advocate within CloudPassage and identify and support areas for improvement in the customer experience, both in our product and processes.
- Be team-oriented, but with a bias towards action to get things done for your customers.
Requirements: Strong cloud security knowledge & experience, including:
- End-to-end enterprise security processes
- Cloud security - cloud migrations & shift in security requirements, tooling & approach
- Hands-on DevOps, DevSecOps architecture & automation (critical)
- 4+ years experience in security consulting and project/program management serving cybersecurity customers.
- Complex, level 3 technical support
- Remotely diagnosing & resolving DevSecOps issues in customer environments
- Interacting with CloudPassage Engineering team with customer issue reproduction
- Experience working in a security SaaS company in a startup environment.
- Experience working with Executive and C-Level teams.
- Ability to build and maintain strong relationships with internal and external constituents.
- Excellent organization, project management, time management, and communication skills.
- Understand and document customer requirements, map to product, track & report metrics, identify up-sell and cross-sell opportunities.
- Analytical both quantitatively and qualitatively.
- Excellent verbal and written communication skills.
- Security certifications (Security +, CISSP, etc.).
Expert Technical Skills:
- Consulting and project management: documenting project charters and project plans, executing delivery management, status reporting. Executive-level presentation skills.
- Security best practices expertise: software vulnerabilities, configuration management, intrusion detection, file integrity.
- System administration (including Linux and Windows) of cloud environments: AWS, Azure, GCP; strong networking/proxy skills.
Proficient Technical Skills:
- Configuration/Orchestration (Chef, Puppet, Ansible, SaltStack, CloudFormation, Terraform).
- CI/CD processes and environments.
Familiar Technical Skills & Knowledge: Python scripting & REST APIs, Docker containers, Zendesk & JIRA.
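Python scripting against REST APIs, as listed above, commonly reduces to a small authenticated request helper. This standard-library sketch assumes a bearer-token header scheme, and the endpoint in the usage comment is hypothetical:

```python
import json
import urllib.request
from urllib.parse import urlencode

def build_request(url, api_key, params=None):
    """Build an authenticated GET request with optional query parameters."""
    if params:
        url = f"{url}?{urlencode(params)}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed bearer-token scheme
            "Accept": "application/json",
        },
    )

def get_json(req, timeout=10):
    """Execute the request and decode a JSON response body."""
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

# usage sketch (hypothetical endpoint):
# req = build_request("https://api.example.com/v1/servers", api_key="...")
# servers = get_json(req)
```

Separating request construction from execution keeps the auth and query-string logic unit-testable without any network access.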
Job title: Azure Architect
Locations: Noida, Pune, Bangalore and Mumbai
- Develop and maintain scalable architecture, database design and data pipelines and build out new Data Source integrations to support continuing increases in data volume and complexity
- Design and Develop the Data lake, Data warehouse using Azure Cloud Services
- Assist in designing end to end data and Analytics solution architecture and perform POCs within Azure
- Drive the design, sizing, POC setup, etc. of Azure environments and related services for the use cases and the solutions
- Review solution requirements and support architecture design to ensure the selection of appropriate technology, efficient use of resources and integration of multiple systems and technology.
- Must possess good client-facing experience with the ability to facilitate requirements sessions and lead teams
- Support internal presentations to technical and business teams
- Provide technical guidance, mentoring and code review, design level technical best practices
- 12-15 years of industry experience is required, with at least 3 years in an architect role and at least 3 to 4 years designing and building analytics solutions in Azure.
- Experience in architecting data ingestion/integration frameworks capable of processing structured, semi-structured & unstructured data sets in batch & real-time
- Hands-on experience in the design of reporting schemas, data marts and development of reporting solutions
- Develop batch processing, streaming and integration solutions and process Structured and Non-Structured Data
- Demonstrated experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and Azure Analysis Services and other ETL technologies.
- Experience performing design, development & deployment using Azure services (Azure Synapse, Data Factory, Azure Data Lake Storage, Databricks, Python and SSIS)
- Worked with transactional, temporal, time series, and structured and unstructured data.
- Deep understanding of the operational dependencies of applications, networks, systems, security, and policy (both on-premise and in the cloud; VMs, Networking, VPN (Express Route), Active Directory, Storage (Blob, etc.), Windows/Linux).
Mandatory Skills: Azure Synapse, Data Factory, Azure Data Lake Storage, Azure DW, Databricks, Python