Senior Backend Engineer (Python / DevOps) - Recreate AI
Location: Bangalore

About the Company
In the age of TikTok and Instagram, we at Recreate AI are using AI to change the way videos are created online. We help businesses create video content from plain text and are on a mission to democratize video making. We are an early-stage startup backed by the world's leading talent investor, Entrepreneur First.

Responsibilities
As the senior backend engineer and a founding engineer, you will lead the entire tech team, with the opportunity to grow into the company's engineering manager and govern the workings of the entire engineering team. You will primarily be responsible for the backend (Django) and DevOps (AWS).

Requirements
- 4-6+ years of experience in engineering (preferably in startups)
- You should have built at least one full product and pushed it to customers
- Working knowledge of backend technologies along with DevOps
- At least 3+ years of experience in Django (if you've used other stacks, the ability to quickly move across and manage the codebase)
- Proficient in using RESTful APIs
- Experience with version control software like Git
- Experience managing deployments on AWS (EC2 / Beanstalk)
- Bachelor's degree or higher

Preferred (Not Required)
- Ability to work in a fast-paced environment with constantly changing requirements
- Experience working in early-stage startups
- Experience with basic machine learning (the more the better, primarily in NLP/CV)
- Degree from Tier 1 colleges in India or equivalent
- Have a life, but willing to work those extra hours once in a while
- A good sense of humor
Responsibilities and Duties
- Gather functional requirements, develop technical specifications, and handle project and test planning
- Lead end-to-end efforts to design, develop, and implement data movement and integration processes in preparation for analysis, data warehousing, or operational data stores
- Troubleshoot, optimize, and tune performance of ETL processes and analytics queries
- Create new metrics and develop tools for monitoring and reporting
- Act in a technical leadership capacity: partner with other team members to apply technical expertise to challenging programming and design problems
- Roughly 85-90% hands-on coding
- Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
- Provide post-production support
- Work cross-functionally with various Intuit teams: product management, product lines, or business units to drive results
- Experience with Agile development and Scrum methodologies

Qualifications and Skills
- 8+ years of experience developing web applications, REST services, and backend data integrations in AWS
- Strong experience with Java, SQL, AWS, Hive, and RDBMS systems is mandatory
- Experience creating data models and building data integrations for tools like Eloqua
- Experience building complex software programs and applications for acquisition, processing, and management of massive quantities of data using Java is a must
- Develop and implement algorithms for data processing and manipulation tasks (e.g., cleaning, parsing, sorting, ranking)
- Experience with the development challenges inherent in highly scalable and highly available web applications and backend systems
- Experience creating and developing complex queries in Hive using SQL
- Solid communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
- Strong understanding of the software design/architecture process
- Experience with the entire Software Development Life Cycle (SDLC)
- Experience with unit testing and Test-Driven Development (TDD)

Preferred Experience:
- Experience with marketing technologies and the marketing tech stack
- Knowledge of NoSQL systems such as MongoDB and Cassandra is a plus
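As a rough illustration of the "cleaning, parsing, sorting, ranking" tasks this role mentions, here is a minimal, dependency-free Python sketch; the record format and field names are hypothetical, not from the posting:

```python
# Minimal clean -> parse -> sort -> rank pipeline over raw rows.
# The "name,score" CSV-like format is illustrative only.

def rank_records(raw_rows):
    records = []
    for row in raw_rows:
        row = row.strip()
        if not row:                     # cleaning: drop blank rows
            continue
        name, score = row.split(",")    # parsing: "name,score"
        records.append((name.strip(), float(score)))
    # sorting: highest score first
    records.sort(key=lambda r: r[1], reverse=True)
    # ranking: attach a 1-based position to each record
    return [(rank, name, score)
            for rank, (name, score) in enumerate(records, start=1)]

rows = ["alice, 9.5", "", "bob, 7.0", "carol, 8.2"]
print(rank_records(rows))
```

In a production pipeline these steps would typically run inside an ETL framework or a Hive/SQL job rather than plain Python, but the logical stages are the same.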
We are hiring a Senior Data Engineer for Bengaluru.

Responsibilities and Duties
- 5-8 years of experience building complex software programs and applications for acquisition, processing, and management of massive quantities of data (big data) using high-level programming languages (e.g., Java, C++, Python)
- Expertise in Eloqua Marketing Cloud: campaign management, emails, landing pages, programs, CDOs, and end-to-end integration of external data into Eloqua
- Lead end-to-end efforts to design, develop, and implement data movement and integration processes in preparation for analysis, data warehousing, or operational data stores
- Collect, process, and interpret large data sets, and identify and extract features of interest using methods such as aggregation and filtering
- Develop and implement algorithms for data processing and manipulation tasks (e.g., cleaning, parsing, sorting, ranking)
- Exercise judgment in selecting methods and techniques to design, develop, and implement software tools and processes to extract, transfer, and load raw or pre-processed data into relational and NoSQL databases or data warehouses
- Troubleshoot, optimize, and tune performance of ETL processes and analytics queries
- Assist in data model documentation, data dictionary, data flow, and data mapping for end users
- Create new metrics and develop tools for monitoring and reporting
- Participate in complete end-to-end data engineering project work, including design, reviews, development, unit testing, and deployment
- Expertise with SQL: create and develop complex queries in Hive using SQL
- Extensive working experience with Hive and Vertica
- Intermediate knowledge of AWS

Qualifications and Skills
- Advanced Python skills
- Data engineering: ETL and ELT skills
- Expertise with streaming data
- Experience in the Hadoop ecosystem
- AWS skills (S3, Athena, Lambda) or any cloud platform

Interested to apply? Please reply with your updated resume and the details below:
- Total Experience in IT
- Relevant Experience
- Current CTC
- Expected CTC
- Notice Period
- Current Location
About Dbaux
DBAUX provides next-generation cyber security and mobile products for the large enterprise, government, and critical infrastructure security markets and is seen as one of the most trusted products in this segment. DBAUX has expanded its portfolio to cover a spread of niche product R&D services and is considered one of the leaders in performance & scale, big data consulting, and customized OS development, among others. You will enjoy working with us if you are looking to work in a stimulating, flexible, open environment. You will experience ownership and independence while providing technical solutions that work. If you would like to know more about us, please visit our website: www.dbaux.com

Title: DevOps Sr. Engineer
Exp: 5-8 yrs
Any one certification mandatory:
- AWS SysOps certification
- Red Hat (RHCE) certification

Details:
- Good expertise on Linux (preferably CentOS / Ubuntu) and hands-on experience in one of the scripting languages (Ruby, Python, UNIX shell scripts)
- Proficient in debugging Python/Java applications and understanding the core concepts around them
- Proficiency in working with web/app servers like Apache, Tomcat, and NGINX
- Strong knowledge of databases: RPS, MongoDB, Redis
- Clear working knowledge of Subversion and Git: writing pre-commit scripts and maintaining multiple branches (updating, creating, merging, etc.)
- Experience with continuous integration build systems (e.g., CruiseControl, Jenkins, Hudson)
- Hands-on experience managing AWS resources; review and recommend automation and build cost efficiencies
- Experience with containers (ECR), Kinesis Streams / Data Pipeline, and monitoring
- CI/CD (Jenkins, Bitbucket, GitHub)
- Scripting languages (Bash, Python)
- Configuration management (Salt/Puppet/Chef)
- Release management tools
- Automation of deployment scripts for web applications, DB, and infrastructure
- Ability to work alternate shift schedules and a 24x7 on-call schedule, if needed
- Strong and effective communication skills
- Should be able to interface with large delivery teams and solve the day-to-day environment and build issues that are reported
- Resolve the issues queue in accordance with cloud services standards to ensure cases are resolved in a timely manner
- Ability to work well in a team environment and independently while tackling complex problems
- Ability to learn and adapt to new technologies and infuse them into the current ecosystem

Education: Preferably B.E/B.Tech
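To illustrate the "writing pre-commit scripts" requirement above, a minimal Python sketch of the kind of check such a hook might run; the forbidden patterns and sample input are illustrative, and a real hook would scan the output of `git diff --cached`:

```python
# Core check a Git pre-commit hook might run over staged changes,
# rejecting commits that contain leftover debugger calls.
FORBIDDEN = ("pdb.set_trace(", "breakpoint(")  # illustrative patterns

def check_text(text):
    """Return (line_no, pattern) pairs for forbidden patterns in text."""
    violations = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for pattern in FORBIDDEN:
            if pattern in line:
                violations.append((line_no, pattern))
    return violations

staged = "x = compute()\npdb.set_trace()\nreturn x\n"
for line_no, pattern in check_text(staged):
    print(f"reject commit: line {line_no} contains {pattern}")
```

In an actual repository this logic would live in `.git/hooks/pre-commit` and exit non-zero on any violation so Git aborts the commit.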
Looking for a full stack developer who will hold responsibility for the following:
- Build highly scalable web apps from scratch
- Maintain and extend the existing code base
- Maintain proper documentation of the code base
Job Title: Full Stack Developer
Location: Bangalore

Purpose: The person in this position will be responsible for backend integration of deep learning algorithms and creating dashboards for clients.

Roles & Responsibilities:
- Demonstrates a growth mindset, seeks feedback often, and is effective in continuous personal and professional development
- Provides expertise in all phases of the development lifecycle, from concept and design to testing
- Defines the architecture, best practices, and coding standards for the product development team
- Supports continuous technical improvement by investigating alternatives and technologies and presenting these for architectural review
- Motivates team members and extends goodwill to other employees while having fun!

Job Requirements:
- 2+ years of software industry experience
- Strong expertise in JS, PHP, React, Node, Angular 2+, MySQL, PostgreSQL
- Solid understanding of software design, development, testing, and problem-solving
- Expertise in coding efficient, high-quality, modularized software
- Experience developing web services: REST/SOAP APIs, HTTP APIs, microservices
- Experience setting up and managing servers; DevOps experience is a big plus
- Strong exposure to databases: RDBMS such as PostgreSQL, and NoSQL such as DynamoDB and Elasticsearch
- Experience with cloud/storage on Amazon (AWS): EC2/EBS/S3
- Expertise in test automation
- Familiarity with the Unix shell and source control systems and tools such as Git
- Strong technical leadership skills
- Comfortable collaborating with designers, front-end developers, and other team members
- Strong communication skills
- Technical coaching and mentoring skills
- Understanding of machine learning and natural language processing is a plus
Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS clusters / Kubernetes
- Production experience building scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor, and optimize production database servers
- Prior work with cloud monitoring tools (Nagios, Cacti, CloudWatch, etc.)
- Experience with Docker, Kubernetes, Mesos, and NoSQL databases (DynamoDB, Cassandra, MongoDB, etc.)
- Other open-source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge of the Linux environment
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects
- Working knowledge of configuration management (Chef, Puppet, or Ansible preferred)
- Continuous integration tools (Jenkins preferred)
- Experience handling large production deployments and infrastructure
- DevOps-based infrastructure and application deployment experience
- Working knowledge of AWS network architecture, including designing VPN solutions between regions and subnets
- Hands-on knowledge of AWS AMI architecture, including the development of machine templates and blueprints
- Should be able to validate that the environment meets all security and compliance controls
- Good working knowledge of AWS services such as messaging, application services, migration services, and cost management
- Proven written and verbal communication skills
- Understands and can serve as the technical team lead to oversee the build of the cloud environment based on customer requirements
- Previous NOC experience
- Client-facing experience with excellent customer communication and documentation skills
YOE: 1-3 years
Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure.
➢ Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar SaaS platforms.
➢ Work with developers to institute systems, policies, and workflows that allow for rollback of deployments; triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with post-mortems, follow-up, and review of issues affecting production availability.
➢ Establish and enforce systems monitoring tools and standards.
➢ Establish and enforce risk assessment policies and standards.
➢ Establish and enforce escalation policies and standards.
Job Description
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities
- Work with big data tools and frameworks to provide requested capabilities
- Identify development needs in order to improve and streamline operations
- Develop and manage BI solutions
- Implement ETL processes and data warehousing
- Monitor performance and manage infrastructure

Skills
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
- Good knowledge of data querying tools: SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB and MongoDB is an advantage
- Excellent written and verbal communication skills
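At its core, the stream-processing work described above often reduces to windowed aggregation over keyed events. A dependency-free Python sketch of that idea; in practice this would run on Kafka plus Spark Streaming, and the event shape here is hypothetical:

```python
from collections import defaultdict

# Count events per key per tumbling window. Events are plain
# (timestamp_seconds, key) tuples -- an illustrative shape only;
# a real job would consume these from a Kafka topic.

def windowed_counts(events, window_seconds):
    """Return {(window_start, key): count} for tumbling windows."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (5, "click"), (12, "view"), (13, "click")]
print(windowed_counts(events, 10))
```

Spark Streaming's window operations apply the same grouping continuously over micro-batches instead of a finite list.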
Role and Responsibilities
The candidate will be responsible for enabling a single view of data from multiple sources.
- Create data pipelines from the data lake to a graph database
- Design the graph database
- Write graph database queries for the front-end team to use for visualization
- Enable machine learning algorithms on graph databases
- Guide and enable junior team members

Qualifications and Education Requirements
- B.Tech with 2-7 years of experience

Preferred Skills
Must Have
- Hands-on exposure to graph databases like Neo4j, Janus, etc.
- Hands-on exposure to programming and scripting languages like Python and PySpark
- Knowledge of working on cloud platforms like GCP, AWS, etc.
- Knowledge of graph query languages like CQL, Gremlin, etc.
- Knowledge and experience of machine learning

Good to Have
- Knowledge of working in a Hadoop environment
- Knowledge of graph algorithms
- Ability to work on tight deadlines
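The "single view" queries this role describes are, at bottom, graph traversals. Real work would use Cypher or Gremlin against a graph database such as Neo4j; this dependency-free Python sketch shows the underlying traversal, with hypothetical node names:

```python
from collections import deque

# Breadth-first traversal over an adjacency-list graph: collect every
# node reachable from a start node. This is the logical operation behind
# a Cypher/Gremlin "expand outward from an entity" query.

def reachable(graph, start):
    """Return the set of nodes reachable from start (inclusive)."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

graph = {"customer": ["order"], "order": ["product"], "product": []}
print(sorted(reachable(graph, "customer")))
```

A graph database adds indexing, persistence, and a declarative query language on top of this traversal primitive.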
Roles & Responsibilities:
- Design, develop, and maintain data platforms on cloud/on-prem
- Translate business needs into technical specifications and frameworks
- Maintain and support data analytics platforms and applications
- Perform quality assurance to ensure data correctness
- Guide junior developers in their duties when needed
- Recommend improvements to provide optimum solutions
- Conduct training programs and knowledge-transfer sessions for junior developers when needed

Qualifications/Education/Experience/Skills:
- Bachelor's degree in Computer Science/Electronics or a related field
- 3+ years of experience building data solutions on cloud platforms like AWS, Azure, or GCP
- Must have orchestrated at least 2 projects using any of the cloud platforms (GCP, Azure, AWS, etc.)
- Hands-on Python/Java experience from a data engineering perspective is a must
- Must have worked on at least one of:
  - AWS: S3, RDS/Redshift, Batch, CloudWatch, IAM roles
  - Azure: Blob, ADF, ADLS/SQL DW (Synapse Analytics)/AAS, AppLogic, app registration, Azure AD
  - GCP: Cloud Storage, BigQuery, App Engine, Compute Engine, IAM roles
- Experience with at least one of the major ETL tools (Talend + TAC, SSIS, Informatica, Pentaho)
- Experience with any of the object-oriented/object-functional scripting languages (Python, Java, Scala, Shell, .NET scripting, etc.) is a must
- Must have worked on different data warehousing projects involving SCD1/2 dimensions and transactional/aggregate/summarized/degenerate facts in Kimball's model
- Good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented with the ability to work independently
- Takes initiative on improvements and testing results
- Ability to handle multiple tasks and projects simultaneously in an organized and timely manner
- Ability to plan, prioritize, and meet deadlines in a fast-paced environment
- Good presentation and documentation skills are a must
Only candidates ready to attend at least one round of F2F interviews should apply.

Job Description:
- Linux: 4 or more years in Unix systems engineering, with experience in Red Hat Linux, CentOS, or Ubuntu.
- AWS: Working experience and a good understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, Lambda, and Redshift.
- DevOps: Experience with DevOps automation, orchestration/configuration management, and CI/CD tools (Ansible, Chef, Puppet, Jenkins, etc.). Puppet and Ansible experience a strong plus.
- Programming: Experience programming with Python, Bash, REST APIs, and JSON encoding.
- Version Control: Git experience is nice to have.
- Testing: Very familiar with CI/CD and proficient in scripting (Python, Unix shell, etc.).
- Security: Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
- Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios, or Splunk.
- Backup/Recovery: Experience with the design and implementation of big data backup/recovery solutions.
- Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB, HAProxy), and high-availability architecture.
- Ability to keep systems running at peak performance and apply operating system upgrades, patches, and version upgrades as required.
- Implementation of auto scaling for instances under ELB using ELB health checks.
- Work experience with S3 buckets.
- IAM and its policy management to restrict users to particular AWS resources.
- Strong ability to troubleshoot any issues generated while building, deploying, and in production support.
- Experience in performance tuning, garbage collection and memory leaks, networking and information security, and IO monitoring and analysis.
- Experience resolving escalated tickets and performance problems, and creating Root Cause Analysis (RCA) documents for Severity 1 issues.
- Experience with disaster recovery implementations and participation in a 24/7 on-call rotation with another team member.
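As an illustration of the "IAM policy management to restrict users to particular AWS resources" bullet above, a minimal sketch that builds an IAM policy document granting read-only access to a single S3 bucket; the bucket name is a placeholder:

```python
import json

# Build an IAM policy document limiting a principal to read-only access
# on one S3 bucket. In practice this JSON would be attached to a user,
# group, or role via the IAM console or API.

def s3_readonly_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",     # bucket itself (ListBucket)
                f"arn:aws:s3:::{bucket}/*",   # objects within it (GetObject)
            ],
        }],
    }

print(json.dumps(s3_readonly_policy("example-bucket"), indent=2))
```

Note the two Resource ARNs: `ListBucket` applies to the bucket ARN, while `GetObject` applies to the object ARNs, so both are needed for working read-only access.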