● 5 to 8 years of IT experience, including experience developing and implementing big data and Azure cloud solutions
● Minimum 3 years of experience in cloud technology (AWS)
● Strong hands-on knowledge of Spark (with Python as the language)
● End-to-end implementation experience in data analytics solutions (data ingestion, processing, provisioning, and orchestration)
● Strong experience in the AWS ecosystem: Glue, Lambda, RDS, Redshift, IAM, S3, Shield
● Strong SQL and shell-scripting knowledge
● Hands-on experience developing enterprise solutions: designing and building frameworks, enterprise patterns, database design and development
● End-to-end cloud solutions on AWS (Glue, Lambda, RDS, Redshift, IAM, S3, Shield)
● Batch solutions and distributed computing using ETL/ELT (Spark SQL, Spark DataFrames, ADF)
● Implementation of data encryption at rest and in transit
● DW/BI (MSBI, Oracle, SQL Server), data modelling, performance tuning, memory optimization, DB partitioning
● Frameworks, reusable components, accelerators, CI/CD automation
● Mentor and lead data engineering teams to design, develop, test, and deploy high-performance data analytics solutions
Key Skills: AWS native scripting components (Glue, RDS, Redshift) and Spark
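The ingestion → processing → provisioning flow this role covers can be sketched as a toy batch step in plain Python. The function names and record fields below are invented for illustration; a real job would run on Glue/Spark against S3 and Redshift rather than in-process lists.

```python
import json

def ingest(raw_lines):
    """Parse newline-delimited JSON into records, skipping bad lines."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a real Glue/Spark job would route this to a dead-letter sink
    return records

def process(records):
    """Keep records with a positive amount and add a derived field."""
    out = []
    for r in records:
        if r.get("amount", 0) > 0:
            r["amount_cents"] = int(r["amount"] * 100)
            out.append(r)
    return out

def provision(records):
    """Serialize for the serving layer, one JSON object per line."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in records)

raw = ['{"id": 1, "amount": 2.5}', 'not json', '{"id": 2, "amount": 0}']
result = provision(process(ingest(raw)))
```

Only the first record survives: the malformed line is skipped at ingest, and the zero-amount record is filtered during processing.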
Full Stack Developer - Looking for top 5% talent from growth startups

About the Company: Torre Capital is a VC-funded fintech based out of Singapore focused on disrupting the asset management industry. We are backed by prestigious VC firms and a network of angels and industry leaders, with over 18 months of runway. We are creating new ways to interact with and service high-net-worth individuals using the latest tech and analytics interventions. Torre Capital was started by professionals (McKinsey, Accenture, Flipkart, PayU, CIMB) with 50+ years of experience in digital business build and asset management.

Job Description: We are looking for an experienced full stack engineer with 3-7 years of programming experience in a product development team. You are in the top 5% of your field and are passionate about creating new products and innovative code. We are building a solution that deploys elements of tokenization using blockchain, ML-driven portfolio selection and risk management, and a proprietary alternate secondary market for illiquid securities. You will work on relevant and cutting-edge technologies.

Key Requirements:
● BE/BTech or MTech/dual degree in Computer Science/Software Engineering or similar from a tier 1 institute
● Full stack development (Node.js or Python + React + React Native + GCP/AWS)
● Writing clean, semantic HTML and CSS that works across all popular browsers
● Experience with TypeScript
● SQL/NoSQL database design, management, and optimization
● Good at functional-style programming
● You have worked with and built REST APIs and GraphQL
● Experience with continuous deployment; familiar with AWS and DevOps
● Experience implementing tests at all levels: unit, integration, E2E. TDD believer.
● Strong fundamental knowledge of algorithms, data structures, design patterns, and network protocols.
● Previous work experience of at least 2 years in a similar role with an established startup
● Ability to deep dive into problem-solving and build elegant, maintainable solutions to complex problems
● Excellent interpersonal skills
● Deep knowledge of at least one backend framework

What we offer:
● Excellent fixed compensation with milestone-linked bonuses
● Flexible work hours, family-friendly
● Independent work culture, no micromanagement
● Significant ESOPs in a fast-growing startup
Hi, hiring for a Node.js Developer
Experience: 2 yrs
Work location: HSR Layout
Notice period: Immediate to 30 days

Responsibilities and Duties:
● Ability to design, develop, and document scalable SaaS-based applications and APIs.
● Experience and ability to write robust code in C is an added advantage.
● Experience with Windows and Linux; ability to understand software development kits.
● Experienced in building RESTful APIs, with good knowledge of RESTful design patterns and of designing and developing large-scale, enterprise-grade distributed systems and applications.
● In-depth knowledge and understanding of network protocols (TCP, HTTP, etc.) and REST conventions.
● Experienced with job queues and message queues, unit test frameworks, and peer code review.
● Proficient in development tools (Git, Bitbucket, JIRA) and agile practice.
● In-depth understanding of database technologies (SQL and NoSQL).
● Experienced in at least one relational/non-relational database technology (MySQL, PostgreSQL, MongoDB, Cassandra, etc.).
● Work experience with Nginx/Apache2.
● Work experience with real-time applications.
● Experience with continuous integration and delivery.
● Working experience with GraphQL.
● Knowledge of Redis.
● Knowledge of other server-side languages (PHP, Python, RoR).
● Knowledge of any one of AWS/Azure/Google Cloud.
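The job-queue and message-queue item above is the classic producer-worker pattern. A minimal sketch (shown in Python for brevity, though this posting's stack is Node.js), using a sentinel value to stop the worker:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    """Drain the queue until the sentinel None arrives."""
    while True:
        job = jobs.get()
        if job is None:
            break
        results.append(job * 2)  # stand-in for real work (e.g. sending an email)
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# Producer side: enqueue work, then signal shutdown with the sentinel.
for j in (1, 2, 3):
    jobs.put(j)
jobs.put(None)
t.join()
```

With a single worker the queue is drained in FIFO order, so `results` ends up as `[2, 4, 6]`; in a real system the queue would be an external broker (RabbitMQ, SQS, Redis streams) shared across processes.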
Qualification:
● Proven experience in defining DevOps strategy and best practices for digital platforms
● Someone who lives and breathes DevOps culture and truly understands what value a mature DevOps implementation brings to a business
● Knowledge of DevOps best practices and the full software development life cycle, including source control management, build processes, CI/CD, and installation
● Strong team-building and mentoring skills, with proven experience in this area
● Good understanding of security and privacy frameworks and regulations such as SOC 2 and GDPR, and able to steer the company towards compliance
● Must have worked on automating everything while taking advantage of pipelines/Infrastructure-as-Code, ensuring continuous improvements around release cycles
● Solid network debugging skills and an understanding of protocols such as HTTP, SSL, and DNS; should be able to mentor team members
● Familiarity with building/deploying microservices applications and serverless solutions
● Experience integrating static/dynamic code analysis tools into build pipelines
● Experience working in cross-functional Agile/Scrum teams
● Excellent verbal and written communication, with a proven track record of collaborating cross-functionally
● Our projects are mostly on AWS, where we leverage services like VPC, EC2, ELB, Elastic Beanstalk, Route 53, RDS, S3, EFS, ElastiCache, and more. Experience with these services is highly desirable.
● Similar experience on other cloud platforms like Azure and GCP, with the proclivity to explore and learn AWS, can also be considered
● Experience with containerized solutions using Docker, Kubernetes, ECS, and EKS is highly desirable
● Experience in securing production environments and establishing security best practices for DevOps implementations
● Experience with GitHub/GitLab/Bitbucket, Jenkins, Sonar, Nexus, Ansible, JIRA, VPC peering, and monitoring tools/solutions

Responsibilities:
● Adopt and implement best practices and champion an engineering culture emphasizing Agile and DevOps
● Help define the DevOps strategy for new projects
● Oversee DevOps implementation across multiple projects
● Technically guide, direct, and mentor a team of DevOps engineers working on individual projects
● Define the DevOps architecture and implementation approach
● Lead capacity-planning exercises to ensure that the infrastructure is tuned to optimize cost and performance
● Troubleshoot production issues and steer them to resolution along with other teams
● Improve deployment processes to ensure zero downtime and simple rollback protocols for all releases
● Build and improve monitoring and alerting solutions
● Manage multiple competing priorities in a fast-paced, exciting, collaborative environment
● Collaborate with development teams on managing all environments, including dev, QA, staging, and production, across projects
● Help align the road map based on customer and company desires
● Work closely with delivery and engineering teams to standardize development, maintenance, and deployment of code across multiple implementations
● Create and maintain a secured product environment and run vulnerability checks
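The zero-downtime-with-simple-rollback responsibility above boils down to a release gate: promote a version only if it passes a health check, otherwise keep the old one live. A toy sketch (version names and the health-check shape are invented; in practice the check would hit a /health endpoint behind the load balancer):

```python
def release(current_version, new_version, health_check):
    """Deploy new_version, verify it, and fall back to current_version
    if verification fails (the 'simple rollback protocol')."""
    active = new_version
    if not health_check(active):
        active = current_version  # rollback: revert the traffic pointer
    return active

# Simulated health checks: v2 passes, v3 fails.
healthy = {"v1": True, "v2": True, "v3": False}
check = lambda version: healthy[version]

promoted = release("v1", "v2", check)     # healthy release goes live
rolled_back = release("v2", "v3", check)  # failed check keeps v2 live
```

Real blue-green or rolling deployments add traffic shifting and bake time, but the invariant is the same: the previously known-good version must remain deployable until the new one is verified.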
About the Company: ACKO General Insurance: Acko is India’s first online-only insurance company, focused on independent general insurance with its entire operations offered through a digital platform. As India’s first online-only insurtech player, Acko plans to launch low-cost, small-ticket insurance products based on data analytics. Acko plans to create products and deliver opportunities in areas where there are gaps, such as personalized insurance products based on user consumption behaviour. Distribution will be key for Acko as it competes with large insurers that have massive reach. Acko will bring out multiple product lines, offer bespoke pricing and products catering to specific segments, and create products for the internet economy. Low-cost products for the rural sector, delivered via technology, will also be key over time.

Website: https://www.acko.com/

We are a Series C funded company backed by a marquee set of investors such as Mr. Binny Bansal, Amazon, Accel, SAIF, Catamaran, etc. Our total funding to date is $143 million.

Founding Team:
Varun Dua, Founder and CEO
Amit Upadhay, CTO (Ex-BrowserStack, CoverFox; alumnus of IIT Bombay)
Deepak Angrula, VP Engineering (Ex-CoverFox; alumnus of IIT Bombay)
Vaibhav Shah, VP Engineering (Ex-Tesco, Ola; alumnus of IIT Bombay)

Job Description: Here’s what you'll do:
● Lead and mentor a team of 6-10 talented engineers.
● Develop, test, and deploy features across the entire stack alongside your teammates.
● Be responsible for hiring, mentoring, and developing our team of engineers to grow a high-performance engineering organization.
● Collaborate with product and technology teams to deliver major initiatives that drive our customer growth.
● Guide engineering technical strategy and roadmaps.
● Commit to high code quality and delivery requirements.

Here's what we're looking for:
● At least 8 years of work experience on large-scale and high-traffic projects, preferably in consumer-focused product startups.
● At least 2 years of hands-on technical leadership or people management.
● Hands-on experience working with web back ends.
● Strong hands-on coding, design, and architectural skills.
● Comfortable with ambiguity and rapid change: excited about pushing out lots of code quickly and constantly iterating.
● Experience with hiring and building self-sustaining teams.
● Thorough and methodical: you enjoy working with data and using it to make informed business decisions.
● Empathetic and user-focused: you care deeply about the product experience, you understand users’ motivations and frustrations, and you genuinely want to help them.
● Empathetic and employee-focused: you care deeply about the people on your team, helping them achieve their professional goals and enabling them to do the best work of their lives.
What you'll do:
● Drive the architecture of our application platform, considering the team and our future product road map.
● Drive and uphold high engineering standards, bringing consistency to the codebases you encounter and ensuring software is adequately reviewed, tested, and integrated.
● Design and develop production-ready APIs and algorithms at scale.
● Design, develop, process, and analyse data collections from diverse sources, including large-scale structured and unstructured data.
● Build new features for internal and external users, and refactor existing ones to make them better.
● Drive optimization, testing, and tooling to improve the quality of solutions.

What makes you a great fit:
● Experience using one or more of the following languages: Python (preferred), SQL, R.
● Experience with databases and SQL (AWS Redshift (preferred), Postgres, SQL Server).
● Demonstrated expertise working with and maintaining open-source data analysis platforms, including but not limited to: Pandas, scikit-learn, Matplotlib, TensorFlow, Jupyter, and other Python data tools; Spark (PySpark), HDFS, Kafka, and other high-volume data tools; search analytics through Elasticsearch.
● Expert knowledge of distributed computing, optimization techniques, and multiprocessing design principles using Python.
● Experience with NoSQL and streaming platforms (e.g. Kafka, MongoDB, Neo4j) is a plus.
● Experience with advanced analytics and modern machine learning techniques is a plus.
● Experience in the healthcare industry with healthcare data analytics products.
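The "data collections from diverse sources" duty above usually means normalizing structured and semi-structured inputs into one record shape before analysis. A small stdlib-only sketch (the schemas and field names are invented for illustration; at scale this would be Pandas or PySpark code):

```python
import csv
import io
import json

def from_csv(text):
    """Structured source: CSV with id,score columns."""
    return [{"id": int(r["id"]), "score": float(r["score"])}
            for r in csv.DictReader(io.StringIO(text))]

def from_json_lines(text):
    """Semi-structured source: newline-delimited JSON with different field names."""
    out = []
    for line in text.splitlines():
        obj = json.loads(line)
        out.append({"id": int(obj["user_id"]),
                    "score": float(obj.get("rating", 0.0))})
    return out

# Both sources land in the same unified shape, so analysis code is written once.
records = (from_csv("id,score\n1,0.9\n2,0.4")
           + from_json_lines('{"user_id": 3, "rating": 0.7}'))
mean_score = sum(r["score"] for r in records) / len(records)
```

The design point is that each source gets its own adapter while downstream aggregation (here, a mean) never needs to know where a record came from.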
What you'll do:
● Design and implement build, deployment, and configuration management
● Test implemented designs
● Handle code deployments in all environments
● Monitor metrics and develop ways to improve
● Brainstorm new ideas and ways to improve development delivery
● Consult with peers for feedback during testing stages
● Build, maintain, and monitor configuration standards
● Maintain day-to-day management and administration of projects
● Document and design various processes; update existing processes
● Improve infrastructure development and application development

What makes you a great fit:
● Strong experience in Linux/Unix administration
● Solid understanding of containers and container orchestration tools
● Good experience with Docker and Kubernetes
● Solid understanding of TCP/IP, load-balancing clusters, server load balancing, and firewalls
● Solid understanding of one or more cloud systems such as AWS or GCP
● Good experience with configuration management tools such as Ansible, Chef, Puppet, or similar is preferred
● Ability to present and communicate the architecture in a visual form
● Good experience managing production infrastructure with Terraform, CloudFormation, or similar
● Good experience with build management and continuous integration tools (e.g. Jenkins)
● Good scripting experience: Python preferred
Job Summary:
● Develop/design effective and scalable solutions to administrate data clusters, large-scale operations, and infrastructure systems.
● Architect systems, infrastructure, and platforms using Linux and Amazon Web Services to support applications.
● Own and deliver the implementation of new methods for systems, deployment, monitoring, management, and automation.
● Technical depth: exposure to a wide variety of problem-solving skills and the corresponding automation.
● Devise schemes to transfer, monitor, and verify that terabytes of data are moved from diverse locations securely and reliably.
● Real-time problem diagnosis/resolution on live systems.
● Monitor grid health and performance, use critical thinking to find areas for improvement, and develop a monitoring framework and metrics in order to predict system behaviour proactively and take appropriate steps.
● Capacity planning (cloud), provisioning new resources, and the ability to understand various capacity parameters and their cardinality.
● Infrastructure and platform security.
● Infrastructure and platform cost management.
● Participate in on-call rotation using PagerDuty.

Experience Needed:
● Minimum 3+ years of experience in a DevOps role.
● In-depth Linux/Unix knowledge, with a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
● Amazon Web Services.
● DNS, TCP/IP, routing, HA, and load balancing.
● Configuration management using tools like Ansible and Salt.
● HA and load balancing using tools like Elastic Load Balancer and HAProxy.
● Monitoring tools like Sensu, and services like Datadog and New Relic.
● Log management tools like Logstash/Syslog/Elasticsearch or similar.
● Metrics collection tools like Ganglia, Graphite, OpenTSDB, or similar.
● Good understanding of distributed systems like Kafka and ZooKeeper.
● Good understanding of building immutable infrastructure using Packer and Terraform.
● Automation experience using Python/Ruby/Go.
● Good understanding of Linux containers (Docker, CoreOS) and orchestration technologies like Kubernetes and Docker Swarm.
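The data-transfer verification duty in this role (confirming that terabytes moved between locations arrived intact) is typically done by comparing content digests computed on each side. A minimal sketch using incremental hashing, so file size never matters; the chunk values below are placeholders for streamed file reads:

```python
import hashlib

def sha256_of_stream(chunks):
    """Hash content incrementally so terabyte-scale files never
    need to fit in memory; chunks is any iterable of bytes."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def transfer_verified(source_chunks, dest_chunks):
    """Accept a move only when source and destination digests match."""
    return sha256_of_stream(source_chunks) == sha256_of_stream(dest_chunks)

data = [b"part-1", b"part-2"]
ok = transfer_verified(data, [b"part-1", b"part-2"])    # intact copy
bad = transfer_verified(data, [b"part-1", b"part-X"])   # corrupted copy
```

In practice the two digests would be computed on different hosts (or taken from S3 ETags/checksums) and compared centrally, and a mismatch would trigger a retransfer rather than silent acceptance.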