Job Description
Role - Sr. Python Developer
Location - Manyata Tech Park, Bangalore
Mode - Hybrid
Required Tech Skills:
- Experience in Python
- Experience with a framework such as Django or Flask
- Primary and secondary skills: Python, OOP, and data structures
- Good understanding of REST APIs (see the sketch after this list)
- Familiarity with event-driven programming in Python
- Good analytical and troubleshooting skills
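By way of illustration, here is a minimal REST API sketch in Python using Flask, one of the frameworks named above. The /api/tasks resource, its fields, and the in-memory store are hypothetical, chosen only to show the request/response pattern.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for a real database.
_tasks = {1: {"id": 1, "title": "demo task", "done": False}}

@app.route("/api/tasks", methods=["GET"])
def list_tasks():
    # GET returns the whole collection as JSON.
    return jsonify(list(_tasks.values()))

@app.route("/api/tasks", methods=["POST"])
def create_task():
    # POST creates a new resource from the JSON request body.
    payload = request.get_json(force=True)
    task_id = max(_tasks, default=0) + 1
    task = {"id": task_id, "title": payload.get("title", ""), "done": False}
    _tasks[task_id] = task
    return jsonify(task), 201  # 201 Created with the new resource

if __name__ == "__main__":
    app.run(debug=True)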
Job Description:
As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes (a minimal sketch appears after this list of responsibilities).
Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.
Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.
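To make the pipeline responsibilities above concrete, here is a minimal PySpark sketch of a single ingest-transform-load step. The paths, column names, and cleansing rules are hypothetical; on Azure Databricks the SparkSession is provided for you, but it is created explicitly here so the sketch is self-contained.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-pipeline").getOrCreate()

# Ingest: read raw CSV files from a (hypothetical) landing path.
raw = spark.read.option("header", True).csv("/mnt/landing/sales/*.csv")

# Transform: de-duplicate, drop null amounts, derive a date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write the curated data as date-partitioned Parquet.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/sales")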
Skills and Qualifications:
Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.
Proficiency in designing and developing data pipelines and ETL processes.
Solid understanding of data modeling concepts and database design principles.
Familiarity with data integration and orchestration using Azure Data Factory.
Knowledge of data quality management and data governance practices.
Experience with performance tuning and optimization of data pipelines.
Strong problem-solving and troubleshooting skills related to data engineering.
Excellent collaboration and communication skills to work effectively in cross-functional teams.
Understanding of cloud computing principles and experience with Azure services.
About the Role
As a result of our rapid growth, we are looking for a Java Backend Engineer to join our Cloud Engineering team and take the lead in designing and developing several key initiatives for our existing Miko3 product line, as well as our new product development initiatives.
Responsibilities
- Designing, developing and maintaining core system features, services and engines
- Collaborating with a cross-functional team of backend, mobile application, AI, signal processing, and robotics engineers, along with the Design, Content, and Linguistics teams, to realize the requirements of a conversational social robotics platform; this includes investigating design approaches, prototyping new technologies, and evaluating technical feasibility
- Ensuring the backend infrastructure is optimized for scale and responsiveness
- Ensuring best practices in design, development, security, monitoring, logging, and DevOps are adhered to throughout the execution of the project
- Introducing new ideas, products, and features by keeping track of the latest developments and industry trends
- Operating in an Agile/Scrum environment to deliver high-quality software against aggressive schedules
Requirements
- Proficiency in the distributed application development lifecycle (concepts of authentication/authorization, security, session management, load balancing, API gateways) and in programming techniques and tools (application of tested, proven development paradigms)
- Proficiency in working on Linux-based operating systems
- Working knowledge of a container orchestration platform such as Kubernetes
- Proficiency in at least one server-side programming language, such as Java. Additional languages like Python and PHP are a plus
- Proficiency in at least one server-side framework, such as Servlets, Spring, or Spark Java (Java)
- Proficiency in using ORM/data access frameworks such as Hibernate or JPA with Spring or other server-side frameworks
- Proficiency in at least one data serialization framework: Apache Thrift, Google Protocol Buffers, Apache Avro, Google Gson, Jackson, etc.
- Proficiency in at least one inter-process communication framework: WebSockets, RPC, message queues, custom HTTP libraries/frameworks (KryoNet, RxJava), etc.
- Proficiency in multithreaded programming and concurrency concepts (threads, thread pools, futures, asynchronous programming); a minimal sketch follows this list
- Experience defining system architectures and exploring technical feasibility tradeoffs (architecture, design patterns, reliability, and scaling)
- Experience developing cloud software services and an understanding of design for scalability, performance, and reliability
- Good understanding of networking and communication protocols, and proficiency in identifying CPU, memory, and I/O bottlenecks and in handling read- and write-heavy workloads
- Proficiency in the concepts of monolithic and microservice architectural paradigms
- Proficiency in working on at least one cloud hosting platform, such as Amazon AWS, Google Cloud, or Azure
- Proficiency in at least one SQL, NoSQL, or graph database, such as MySQL, MongoDB, or OrientDB
- Proficiency in at least one testing framework or tool, such as JMeter, Locust, or Taurus
- Proficiency in an RPC communication framework such as Apache Thrift or gRPC is an added plus
- Proficiency in asynchronous libraries (RxJava) and frameworks (Akka, Play, Vert.x) is an added plus
- Proficiency in functional programming languages (e.g., Scala) is an added plus
- Proficiency in working with NoSQL/graph databases is an added plus
- Proficient understanding of code versioning tools, such as Git, is an added plus
- Working knowledge of tools for server and application metrics, logging, and monitoring, such as Monit, ELK, or Graylog, is an added plus
- Working knowledge of DevOps configuration management utilities like Ansible, Salt, or Puppet is an added plus
- Working knowledge of containerization technologies like Docker or LXD is an added plus
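Since the other examples in this document are in Python, here is a minimal sketch of the thread-pool and futures concepts referenced in the list above, using Python's concurrent.futures; Java's ExecutorService and Future expose the same ideas. The URL list is hypothetical.

from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

# Hypothetical endpoints to probe concurrently.
URLS = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]

def fetch_status(url):
    # Blocking I/O is a natural fit for a thread pool.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return url, resp.status

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns a Future immediately; the work runs on pool threads.
    futures = {pool.submit(fetch_status, u): u for u in URLS}
    for fut in as_completed(futures):
        try:
            url, status = fut.result()
            print(url, "->", status)
        except Exception as exc:
            print(futures[fut], "failed:", exc)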
Primary Responsibilities
- Understand the current-state architecture, including pain points.
- Create and document future-state architectural options to address specific issues or initiatives using Machine Learning.
- Innovate and scale architectural best practices around building and operating ML workloads by collaborating with stakeholders across the organization.
- Develop CI/CD & ML pipelines that support the end-to-end ML model development lifecycle, from data preparation and feature engineering to model deployment and retraining (a minimal sketch follows this list).
- Provide recommendations around security, cost, performance, reliability, and operational efficiency, and implement them.
- Provide thought leadership around the use of industry standard tools and models (including commercially available models and tools) by leveraging experience and current industry trends.
- Collaborate with the Enterprise Architect, consulting partners and client IT team as warranted to establish and implement strategic initiatives.
- Make recommendations and assess proposals for optimization.
- Identify operational issues and recommend and implement strategies to resolve problems.
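To make the end-to-end pipeline responsibility above concrete, here is a minimal scikit-learn sketch that chains data preparation, feature engineering, and model training into one deployable object. The file name, feature columns, and choice of random forest are hypothetical, picked only to match the techniques named in this posting.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical training data with numeric and categorical features.
df = pd.read_csv("training_data.csv")
X, y = df.drop(columns=["label"]), df["label"]

# Feature engineering: impute numeric gaps, one-hot encode categoricals.
prep = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["segment"]),
])

# One Pipeline object captures preparation + model, so the same artifact
# can be versioned, deployed, and retrained in a CI/CD workflow.
model = Pipeline([("prep", prep), ("clf", RandomForestClassifier(n_estimators=200))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))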
Must have:
- 3+ years of experience in developing CI/CD & ML pipelines for end-to-end ML model/workloads development
- Strong knowledge of ML operations and DevOps workflows and tools such as Git, AWS CodeBuild & CodePipeline, Jenkins, AWS CloudFormation, and others
- Background in ML algorithm development, AI/ML platforms, deep learning, and ML operations in cloud environments
- Strong programming skills with high proficiency in Python, R, etc.
- Strong knowledge of AWS cloud and its technologies such as S3, Redshift, Athena, Glue, SageMaker etc.
- Working knowledge of databases, data warehouses, data preparation and integration tools, along with big data parallel processing layers such as Apache Spark or Hadoop
- Knowledge of pure and applied math, ML and DL frameworks, and ML techniques, such as random forest and neural networks
- Ability to collaborate with data scientists, data engineers, leaders, and other IT teams
- Ability to work on multiple projects and work streams at one time; must be able to deliver results against project deadlines
- Willingness to flex the daily work schedule to allow for time-zone differences in global team communications
- Strong interpersonal and communication skills
Starting salary 20,000 INR; expected earnings 40,000+ INR.
Effebot is looking for managers who will be selling... Effebot!
Hello! We’re an international company, founded in Russia in 2013. Though we’re located in Perm, we’re expanding worldwide and are now looking for new team members!
Effebot is an effective voice mailing service that provides smart robocalls with IVR customization and speech recognition.
Effebot can be used for any outbound calls: cold calling, surveys, complaint and claim resolution, notifications, etc. It can easily and quickly inform clients about promotions, sales, or news. Effebot also provides cloud solutions for inbound calls.
We are a team of 130+ great specialists, and we are happy to welcome new employees to our ranks.
You may see more on our website: https://effebot.com/
So, if you:
- have successful experience in cold-calling sales (2+ years);
- have experience with remote work, enjoy it, and know how to organize your work day by yourself;
- are responsible - we trust you to be our representative in your country and we want to rely on you;
- are ambitious - with good motivation you’ll be earning more;
- can fluently speak English (other languages are a nice advantage!);
- are proud of your varied and articulate speech.
Then you’re welcome to apply to our team!
What you’ll need to do:
- search for new clients in India;
- effectively negotiate with clients’ representatives of different levels;
- develop client accounts;
- make a lot of outbound calls;
- work in the amoCRM system.
What you need to have:
- a computer;
- Internet access;
- USB headphones;
- the will to work and earn.
Salary conditions:
- after onboarding, your salary is around 20,000 INR monthly
- plus a 20-30% commission on the sales you make
- we expect you to earn at least $500-600.
Work conditions:
- remote work (you work from your home);
- working hours 10:00-19:00 (IST);
- working days from Monday to Friday;
- a well-structured onboarding process;
- career growth prospects.
Requirements:
- Solid experience in Java or Golang
- Good to have exposure to ML
- Should have experience in cloud computing
- Ability to quickly learn and contribute across multiple codebases
- Overcomes roadblocks and requires minimal oversight
- Takes the initiative to fix issues/tech debt before being assigned to them
- Able to deep-dive into the codebase and advise QA of possible regression impact
- Communicates tech decisions through design docs and tech talks
- Has delivered projects with end-to-end accountability
- Keeps track of industry trends and introduces the right tech/tools for a given job
- Excellent understanding of software engineering practices, Design Patterns, Data Structures, Algorithms
- 4+ years of experience in a product-driven organisation
- A Bachelor's or Master's degree in engineering from a reputed institute (preferably IITs, NITs, or other top engineering institutes)