1. Set up monitoring and dashboards for applications and infrastructure
2. Provide emergency response by being on-call during US hours and reacting to monitoring alerts
3. Work in the US time zone for up to 4 months a year, actively monitoring systems
4. Propose ideas and implement automation to improve the efficiency of repetitive infrastructure tasks
5. Actively look for opportunities to improve the availability and performance of the system by applying learnings from monitoring and observation
About Talview
Talview helps enterprises beat Hiring Lag and engage great candidates faster with the world's first AI-led Instahiring platform. Hiring Lag cripples businesses when open positions lie vacant, adversely impacting a company's revenue, operations, and quality of hire. The best candidates today are in the market for only 10 days, and 6 of 10 candidates drop out of the hiring funnel when the process is too long. Talview empowers businesses to achieve a 100% digital hiring process with a parallel "one-click" high-quality candidate experience from the first interaction to final selection. Our clients include Adecco, Airbnb, Bajaj Allianz, Cognizant, Deloitte, HCL, Sephora, and Unicef. Learn how we're beating Hiring Lag at www.talview.com.

Lead DevOps Engineer
We are looking for a DevOps Engineer responsible for developing and managing an end-to-end DevOps automation solution spanning infrastructure provisioning, code builds, and deployment pipelines for microservice-based applications. This position reports to the DevOps Lead.

An ideal candidate would have:
- 5+ years of experience with development practices and awareness of leading cloud technologies/trends, in order to formulate a new DevOps product catalog, devise deployment workflows and strategies, and integrate dev tools for static and dynamic code analysis
- 3+ years of experience writing scripts for Azure or AWS deployments
- 1+ years of experience using Kubernetes
- Expertise in a few infrastructure provisioning tools such as Docker, Chef, Puppet, Ansible, Packer, CloudFormation, and Terraform
- Experience with application servers, web servers, and databases (Nginx, PostgreSQL, MongoDB, HAProxy, Tomcat, Flash Media Server/Red5, Redis, ElastiCache, etc.)

Culture: Talview's culture is rooted in Collaboration, Commitment to excellence, Credence, and Customer-centricity (our 4Cs).
The People: We have Off-Roaders, Bikers, Table Tennis Players, Cricketers, Runners, Video Game champs, Musicians, and almost every variety of person on the team. We have great family people along with happy singles. You will surely find a buddy here once you join us, and you can bring your talented buddies too!

Working at Talview has its perks. Here are some benefits you can expect from us in return! If there's something important to you that's not on this list, talk to us! :)
- An extraordinary chance to scale your growth
- Competitive salary
- Fully stocked pantry with healthy fruits, snacks, and gourmet coffee
- 5-day work week and a flexible work culture
- Whatever equipment helps you get your job done
- Monthly team lunches and annual team-building events
- Team gatherings and company parties!
Key Responsibilities:
- Leverage batch computation frameworks and our workflow management platform (Airflow) to assist in building out different data pipelines
- Lower latency and bridge the gap between our production systems and our data warehouse by rethinking and optimizing our core data pipeline jobs
- Work with clients to create and optimize critical batch processing jobs in Spark
- Develop production-grade Scala/Spark and Python/Spark code on Azure Databricks

Skills and Experience:
- Strong engineering background and an interest in data
- Good understanding of data analysis using SQL queries
- Strong command of Python or Scala as a programming language on Azure Databricks
- Experience developing and maintaining distributed systems built with Azure Databricks or native Apache Spark
- Experience building libraries and tooling that provide abstractions for users accessing data
- Experience writing and debugging ETL jobs using a distributed data framework (Spark, Hadoop MapReduce, etc.) on Azure Databricks
- Experience optimizing the end-to-end performance of distributed systems
- Ability to recommend and implement ways to improve data reliability, efficiency, and quality
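Optimizing batch pipeline jobs of the kind described above often starts by isolating transformation logic in pure functions that can be unit-tested outside a cluster. A minimal sketch, assuming a hypothetical events dataset with `id` and `updated_at` fields (the dedup rule and column names are illustrative, not from the posting):

```python
# Minimal sketch of a batch-pipeline transformation step. Keeping the logic
# in a pure function makes it testable in CI before it is applied to a
# Spark DataFrame (e.g. via a window over 'id' ordered by 'updated_at').

def latest_per_key(rows):
    """Keep only the most recent record per 'id', judged by 'updated_at'."""
    latest = {}
    for row in rows:
        key = row["id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    # Return in deterministic key order so downstream steps are reproducible.
    return [latest[k] for k in sorted(latest)]
```

In a Databricks job this would typically be expressed with DataFrame operations instead, but the pure-Python form is convenient for fast local tests.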
If you want to build a data-intensive AI SaaS product and take ownership of engineering highly scalable modules, we'd like to talk to you. We are a young team working on an exciting product called CustomerGlu. CustomerGlu enables growth teams at eCommerce companies to *save money with offer programs that convert*. We are currently looking for Backend Developers who can be part of our journey in building this product and scaling it up. We don't judge based on work experience or education, just the skills and other traits that fit into our culture.

Responsibilities:
• Design and develop new backend modules.
• Build RESTful APIs using NodeJS.
• Integrate third-party APIs and SDKs.
• Maintain and document existing backend modules.
• Build data-intensive applications using MongoDB, Cassandra, and Spark.
Workdays: Monday to Friday (weekends off), day shift
Location: JP Nagar, Bangalore

Job Description:
• Strong hands-on knowledge of Spark (with Python as the language), with 8+ years of experience
• End-to-end implementation experience with data analytics solutions (data ingestion, processing, provisioning, and visualization) for large-scale and complex environments
• Strong experience in the Azure ecosystem, including HDInsight, Azure Data Factory, Azure Data Lake, SQL DW, and ADLS Gen2
• Strong SQL and shell-script knowledge
• Hands-on experience developing enterprise solutions: designing and building frameworks, enterprise patterns, database design and development
• End-to-end cloud solutions on Azure (Azure SQL DW, Azure Data Factory, HDInsight, SQL on Azure)
• Batch solutions and distributed computing using ETL/ELT (Spark SQL, Spark DataFrames, ADF)
• Implementation of data encryption at rest and in transit
• DWBI (MSBI, Oracle, SQL Server), data modeling, performance tuning, memory optimization, DB partitioning
• Frameworks, reusable components, accelerators, CI/CD automation
• Mentor and lead data engineering teams to design, develop, test, and deploy high-performance data analytics solutions

Primary Skills: Azure (Azure SQL DW, Azure Data Factory, HDInsight, SQL on Azure), PySpark, and Spark
Our current operations research product is deployed at some of the largest organizations in the world. This role will be responsible for re-architecting the existing solution and adding components of machine learning and intelligence to its logic. We are looking for a passionate individual who loves technology and is willing to design and create a flexible, long-lasting product architecture.
Only Azure-certified professionals.

1. Azure/Infra Architect

Responsibilities:
- Candidate must have demonstrated experience migrating solutions to the Azure cloud platform
- Assess and analyze the current infrastructure footprint (compute, storage, network) and complete requirements gathering (HW, SW) to move individual applications to the cloud
- Complete high-level and low-level design for cloud infrastructure, databases, and other required components
- Build and deploy infrastructure in the cloud
- Migrate OS, databases, and applications from on-premise to the cloud
- Apply technical knowledge and customer insights to create an application modernization roadmap and architect solutions that meet business and IT needs, ensuring technical viability of new projects and successful deployments, orchestrating key resources and infusing key application development and DevOps technologies (e.g. App Service, containers, serverless, cloud native, Java/Node.js, DevOps, and OSS tools)
- Connect with the client team to remove key blockers

Qualifications:
- Must have at least 1 Azure certification (Administrator, Developer, or Architect)
- Must have 5+ years of experience in cloud and data center migration projects as a solution architect
- Must have an in-depth understanding of compute, storage, and network components, including backup, monitoring, and DR environment requirements
- Experience and understanding of large-scale application portfolios in enterprise-wide environments (including migration of on-premise workloads to the cloud) required
- Deep domain expertise in cloud application development solutions (e.g. IaaS, serverless, API management), container orchestration (e.g. Kubernetes, Cloud Foundry), continuous integration technologies (e.g. Jenkins, Spinnaker, Azure DevOps, Chef, Puppet), web application server technologies, cloud application design, software architecture and practices (design/development/deployment, Agile, SCRUM, ALM), breadth of technical experience, and technical aptitude to learn and adjust to new technologies and cloud trends required
- Experience and understanding of large-scale application development projects (including key coding skills and practices) required

2. Azure DevOps Engineer

Responsibilities:
- Candidate will implement automation solutions on Azure using open-source tools and technologies (e.g. Ansible, Jenkins, Chef) in any of the following technology tiers/products: Unix/Linux, Microsoft Windows Server, Oracle Database, middleware (IBM WebSphere, JBoss), VMware
- Candidate must have demonstrated experience migrating solutions to the Azure cloud platform
- Candidate will provide expert-level automation solution support
- Perform as primary systems administrator in a large enterprise environment
- Perform patch management tasks, including: maintaining current knowledge of available patches, deciding which patches are appropriate for particular systems, ensuring that patches are installed properly, testing systems after installation, and documenting all associated procedures
- Test new releases of products to ensure compatibility and minimize user impact
- Recommend and implement system enhancements that will improve the performance and reliability of the system, including installing, upgrading/patching, monitoring, problem resolution, and configuration management
- Develop, document, and automate technical processes and procedures as needed
- Adhere to strict information systems security guidelines in all cases

Qualifications:
- Must have at least 1 Azure certification (Administrator, Developer, or Architect)
- Must have experience with one or more open-source tools (Ansible, Chef, Puppet, YAML, Packer, etc.)
- Hands-on experience with change automation and orchestration
- Proficiency in scripting languages (e.g. Perl, Python, Bash, Ruby)
- Hands-on experience troubleshooting and diagnosing hardware and software problems
- Experience installing, configuring, and maintaining computer hardware and software in a large-scale enterprise environment
- Excellent written, verbal, and interpersonal skills
Responsibilities:
1. Design, develop, and deliver the web services and jobs that power Niki, India's first transactional bot.
2. Write code that is clean, testable, performant, scalable, documented, and secure.
3. Design and architect new sub-systems, identify performance bottlenecks, and suggest design/architecture improvements.

Qualifications:
1. Minimum of 4 years of software development experience, preferably developing e-commerce applications.
2. Bachelor's and/or Master's degree in Computer Science or a related field of study.

Preferred Qualifications:
1. Fluent in Java or another object-oriented programming language.
2. Knowledge of design patterns and design principles.
3. Experience delivering REST-style web services.
4. Significant experience in asynchronous and multi-threaded programming.
5. Strong CS fundamentals with a good hold on data structures and algorithms.

Good to have:
1. Knowledge of Spring.
2. Experience with AWS, Azure, or Google Cloud.
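Delivering REST-style web services, as listed above, ultimately comes down to routing (method, path) pairs to handlers and returning a status plus a serialized body. A toy sketch in Python for brevity (the posting prefers Java; the /orders resource and its fields are hypothetical):

```python
import json

# Toy in-memory store for the hypothetical /orders resource.
_ORDERS = {}

def handle_request(method, path, body=None):
    """Dispatch a (method, path) pair to a handler; returns (status, body)."""
    if method == "GET" and path == "/orders":
        # List all orders, sorted by id for a stable response.
        return 200, json.dumps(sorted(_ORDERS.values(), key=lambda o: o["id"]))
    if method == "POST" and path == "/orders":
        order = json.loads(body)
        _ORDERS[order["id"]] = order  # create or overwrite by id
        return 201, json.dumps(order)
    return 404, json.dumps({"error": "not found"})
```

A real service would layer this behind a framework (Spring in Java, for instance) that handles parsing, threading, and error mapping, but the dispatch shape is the same.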
Role: The role of lead is not a textbook checklist; however, there are technical responsibilities that a team must fulfill, and we expect the tech lead to ensure these responsibilities are covered and to be able to cover them themselves if needed. We expect tech leads to take a collaborative approach to leading their team. This is especially important when considering the amount of experience that each of our consultants brings. Given this, we expect you to:
● Respect the other members of your team and recognize you don't always know best.
● Spot gaps in team capability and figure out how to fix them as a team.
● Be hands-on, able and willing to contribute to development; however, don't expect to be coding all of the time.
● Encourage the team to be proactive and give them responsibility.

Responsibilities
● Have a clear understanding of the deployment architecture
● Have a clear understanding of the build pipeline
● Understand how you get changes into production
● Understand how all parts of the system work together
● Facilitate technical communication with other teams, both within your engagement and across other EE clients
● Actively seek to remove knowledge silos within the team
● Ensure you have a release/branching strategy in place
● Act as the primary point of contact for your team when communicating with other teams
● Ensure there is a technical vision for the team
● Liaise with environment specialists to ensure smooth deployments to production
● Encourage the team to follow good development practices aligned to EE technical values
● Give feedback to the delivery lead or engagement manager on the quality of your team (good and bad)
● Recognize team members who have the potential to grow into team leads
● Ensure the use of new technologies or dependencies does not block the team.
● Ensure the team keeps necessary architectural documents up to date
● Keep an eye on the long-term consequences of architectural choices, and remind others when necessary
● Build good relationships with your team members; act as a mentor when required
● Keep the client informed and engaged in the technical side of the project
● Build relationships across your client community

Technologies / Experience
The successful candidate must have the following experience:
● Worked as the tech lead of a development/delivery team in a large organization
● Worked with a variety of different technical architect roles
● Deeply proficient in at least one programming language
● Comfortable using other languages, with evidence of using multiple languages
● Hands-on experience with some form of configuration management tooling, e.g. Ansible, Chef, Puppet
● Hands-on experience with at least one continuous integration and continuous delivery technology, e.g. Jenkins, GoCD, TeamCity, or Bamboo
● Full-stack development experience, from the user interface through to data persistence
● A strong proponent of XP practices such as TDD
● Worked with a delivery team to formulate an automated test strategy
● Worked as part of a number of agile delivery teams and seen a number of different approaches to delivery
● Good appreciation of secure coding practices and end-to-end system security

The following exposure will also be looked on favorably:
● Performing an 'architect' role while retaining hands-on involvement
● Working with cloud hosting platforms such as AWS, Rackspace, Azure, etc.
● Infrastructure management technologies such as CloudFormation or Terraform
DevOps Engineer
Position: Full time
Base Location: Bangalore, with extensive and frequent travel to SEA

R&R:
● Work with people across various levels, from the delivery team to top management
● Support internal products and external customers on multiple platforms
● Work with customer teams (specifically development teams) to analyse their processes and environments to improve user satisfaction
● Improve client DevOps teams by enabling them with DevOps concepts and processes
● Act as the technical expert across multiple client projects, helping them enhance their delivery pipelines and their overall DevOps and Agile practices
● Identify state-of-the-art CI/CD tools, prepare decision proposals and implementation plans for these tools, and carry out introduction and training, allowing client delivery to move faster
● Bring new and cutting-edge DevOps methodologies both to Greyamp and to clients

Need to have:
● 3-5 years of relevant experience (at least 1 year in development, and 2+ in DevOps)
● Understanding of Linux/Unix administration
● Understanding of one scripting language (Python, shell, Ruby, or Perl)
● Experience working with different OS servers (RedHat, Oracle, Microsoft)
● Strong understanding of version control (Git)
● Good understanding of build tools like Ant, Maven, or Gradle
● Experience setting up dashboards using code-quality and vulnerability-check tools (SonarQube)
● Experience implementing and maintaining CI/CD pipelines (Jenkins, CircleCI, or GoCD)
● Good understanding of and working experience with Docker containers
● Experience working with configuration management tools (Chef, Puppet, or Ansible)
● Extensive knowledge of working with cloud platforms and maintaining automation scripts using Terraform or similar tools
(preferably AWS, Azure, or GCP)
● Basic knowledge of working with the cloud product suite (preferably AWS, Azure, or GCP)
● Understanding of and experience working on SaaS architecture
● Understanding of and experience with VMware or other virtual environments

Nice to have:
● Experience working with test automation tools (preferably Selenium)
● Knowledge of and experience with Kubernetes
● Understanding of and experience with monitoring tools like the ELK stack, Prometheus and Grafana, Nagios, or Dynatrace
● Experience working with database deployments
● Client handling experience and stakeholder management
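Implementing and maintaining CI/CD pipelines, as asked for above, reduces to running ordered stages with fail-fast semantics. A toy Python sketch of that skeleton (the stage names and return convention are made up for illustration, not tied to any particular CI product):

```python
def run_pipeline(stages):
    """Run named CI stages in order; stop at the first failure (fail-fast).

    'stages' is a list of (name, callable) pairs; each callable returns
    True on success. Returns the list of (name, result) pairs executed.
    """
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:  # fail fast: later stages never run
            break
    return results
```

Real pipelines (Jenkins, CircleCI, GoCD) add retries, parallel fan-out, and artifact passing between stages, but the fail-fast skeleton is the same.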
DevOps Specialist
Dunya Labs is a deep tech product company currently focused on building infrastructure, developer tooling, and middleware for deploying scalable blockchain applications. We combine a theoretical research team with a product team to lead cutting-edge developments in the blockchain space. We are seeking a DevOps Specialist to design and support our infrastructure environments.

Key Responsibilities:
● Support our continuous integration processes, which run on various platforms
● Design entire application environments that can be fully automated or replicated, including network, compute, and data stores
● Develop solutions by working with product managers, scrum masters, architects, developers, and business stakeholders
● Create and enhance Continuous Integration automation across multiple platforms, including Java, Node.js, and Swift
● Create and enhance Continuous Deployment automation built on Docker and Kubernetes
● Create and enhance dynamic monitoring and alerting solutions using industry-leading services
● Develop automation to ensure security across a geographically dispersed hosting environment
● Leverage new technology paradigms (e.g., serverless, containers, microservices)

Qualifications:
● Bachelor's or Master's degree in Information Systems, Information Technology, Computer Science, or Engineering, or equivalent experience
● Good exposure to Agile software development and DevOps practices such as Infrastructure as Code (IaC), Continuous Integration, and automated deployment
● Expertise with Continuous Integration and Continuous Delivery (CI/CD)
● Architecting, designing, and developing applications on PCF
● Designing and building applications and serverless technologies
● Experience architecting highly available systems that utilize load balancing, horizontal scalability, and high availability
● Strong communication and analytical/problem-solving skills; should be a team player

Experience:
● Managing large production environments in the cloud and
operational support
● Automating infrastructure deployment, management, and monitoring
● Hands-on experience with DevOps technologies
● Troubleshooting and problem solving for infrastructure issues

Skills and Competencies:
● Linux/Unix administration
● Infrastructure automation with Chef/Puppet/Ansible/Terraform
● Cloud operations and services: AWS/GCP/Azure (AWS preferred)
● Networking: Unix networking and an understanding of cloud networking
  ■ AWS VPC, NAT, firewalls, subnets, etc.
  ■ TCP/IP and HTTP(S) protocols
● Source code control: GitHub/GitLab
● CI/CD: Jenkins
● Container technologies: Docker and Kubernetes
● Monitoring tools: Nagios/CloudWatch/Prometheus
● SQL and NoSQL databases: MySQL/PostgreSQL and MongoDB
● Scripting in Python/Ruby/shell
● Software: Apache Web Server, Tomcat, Nginx, Kafka, ZooKeeper, etc.
● Security: HTTPS/TLS, certificates, digital signatures, VPNs, firewalls, AWS Security Groups, IAM, DMZ architecture

Dunya Labs is an equal opportunities employer and welcomes applications from all sections of society; it does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis protected by applicable law.
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata
Pramata's unique, industry-proven offering digitizes critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications on the Pramata cloud-based customer digitization platform. Pramata's customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue; ensure regulatory and pricing compliance; and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works
Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing, and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely, and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible, and highly secure.

The opportunity - What you get to do
You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment and application management, and day-to-day support of development teams.
You will manage the development of capabilities to achieve higher automation, quality, and performance in automated build and deployment management, release management, on-demand environment configuration and automation, configuration and change management, and production environment support:
- Application monitoring, performance management, and production support of mission-critical applications, including application and system uptime and remote diagnostics
- Security: ensure that the highly sensitive data from our customers is secure at all times
- Instrument applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
- High availability and disaster recovery: build and maintain systems designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
- Automate provisioning and integration tasks as required to deploy new code
- Monitoring: take proactive steps to monitor complex interdependent systems to ensure that issues are identified and addressed in real time

Skills required:
- Excellent communicator with great interpersonal skills, driving clarity about intricate systems
- Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
- Good understanding of software application builds, configuration management, and deployments
- Strong scripting skills in Shell, Ruby, Python, Perl, etc., with a passion for automation
- Comfortable with collaboration, open communication, and reaching across functional borders
- Advanced problem-solving and task break-down ability

Additional Skills (Good to have, but not mandatory):
- In-depth understanding of and experience working with any cloud platform (e.g. AWS, Azure, Google Cloud)
- Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
- Ability to work under pressure and solve problems using an analytical approach; decisive, fast-moving, and with a positive attitude

Minimum Qualifications:
- Bachelor's degree in Computer Science or a related field
- Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
- Strong programming skills in Python, Shell, or Java
- Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
- Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS
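The proactive monitoring responsibilities above usually reduce to a health probe plus a retry policy. A minimal Python sketch (the probe, attempt count, and delay are hypothetical defaults, not values from the posting):

```python
import time

def wait_until_healthy(probe, attempts=5, delay=1.0):
    """Retry a health probe with a fixed delay between attempts.

    'probe' is any zero-argument callable returning True when the service
    is healthy (e.g. an HTTP GET against a /health endpoint). Returns True
    as soon as a probe succeeds, False if every attempt fails.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:  # no point sleeping after the last try
            time.sleep(delay)
    return False
```

Production monitoring stacks wrap the same loop with alerting and exponential backoff; injecting the probe as a callable keeps the policy unit-testable.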
Security Content Developer
* As a security content author, this role involves hands-on security and compliance work. Prior experience with security tools (exploit development, port scanners, and so on) and scanners (OVAL, SCAP, Nessus, OpenVAS) would be a plus. As a member of the security content team, you will be asked to develop and manage security content and assume full responsibility for the content quality of the cloud services, applying the value-added knowledge that comes from your prior experience. You'll be asked to adhere to RedLock standards and procedures while developing the content.
* Strong exposure to common vulnerabilities and knowledge of common vulnerability standards such as CVE, CVSS, and CCE.
* Expertise in authoring/mapping content for various security compliance standards, both regulatory (PCI, HIPAA, SOC 2, SOC 3, GDPR, and so on) and standard compliance frameworks such as NIST 800-53, CIS, and so on, is a must.
* This role requires prior hands-on experience with the cloud services of AWS, Azure, or GCP (at least one of the three).
* Expertise in remediation of vulnerabilities or compliance issues (misconfiguration alerts), via both procedural and CLI methods.
SENIOR C# DEVELOPER
At The NDL Group we support big brands, media owners, and agencies, delivering global promotion and rewards programmes designed to wow their staff and customers. We are headquartered in London and have been in business for more than 20 years. Using our proprietary technology system, Promotigo™, we deliver increased efficiency across multiple territories through a single platform. Promotigo™ handles complex procedures such as high-volume code verification and global cashback payments, while providing real-time accountability and measurement. NDL has been behind some of Europe's biggest and best-known promotional campaigns. With clients as diverse as McDonald's, Universal, Xbox, and Nestlé, we have built a firm reputation for delivering successful promotional strategies, underpinned by reliable technology platforms, inspirational prize content, and 5-star winner fulfillment.

AS A SENIOR DEVELOPER, YOU WILL...
● Develop our core web application using a service-oriented architecture, exposing APIs for internal and external clients.
● Implement architecture and design patterns to help ensure that systems scale.
● Perform unit and integration testing before launch.
● Establish processes and best practices around development standards.
● Review product requirements in order to give development estimates and product feedback.
● Apply technical expertise to challenging architecture and design problems.

SKILLS / COMPETENCIES:
● 6+ years of experience developing enterprise-grade web applications with C# / .NET and SQL Server.
● Experience with software design patterns like MVC, MVVM, etc.
● Knowledge of frontend development (HTML, CSS, JS, Angular) is an added advantage.
● Experience building applications using service-oriented architecture and APIs.
● Hands-on experience with Microsoft Azure is an added advantage.
● Strong English communication skills, both written and spoken.
● Ability to work and communicate clearly and efficiently with team members.

If you are a big fish that wants to swim with other big fish in a fast-growing company, joining The NDL Group might be your best next career move. Contact us now!
Should have good knowledge of Windows Azure.