About Company
Our client is a well-funded construction tech start-up backed by a renowned group.
Responsibilities
- Gather intelligence from key business leaders about needs and future growth
- Partner with the internal IT team to ensure each project meets a specific need and resolves successfully
- Assume responsibility for project tasks and ensure they are completed in a timely fashion
- Evaluate, test and recommend new opportunities for enhancing our software, hardware and IT processes
- Compile and distribute reports on application development and deployment
- Design and execute A/B testing procedures to extract data from test runs
- Analyze data related to customer behavior and draw conclusions from it
- Consult with the executive team and the IT department on the newest technology and its implications in the industry
Requirements:
- Bachelor's Degree in Software Development, Computer Engineering, Project Management or a related field
- 3+ years experience in technology development and deployment
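The A/B testing responsibility above can be sketched with a simple two-proportion z-test; the conversion numbers here are hypothetical, and a production setup would more likely use SciPy or a dedicated experimentation platform.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_*: number of conversions, n_*: number of visitors.
    Returns the z statistic; |z| > 1.96 is roughly significant at the 5% level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test run: variant B converts at 15% vs. 12% for variant A.
z = two_proportion_z(120, 1000, 150, 1000)
```

With these numbers z lands just above 1.96, so the uplift would barely clear the conventional significance bar.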
Role: Principal Software Engineer
We are looking for a passionate Principal Engineer - Analytics to build data products that extract valuable business insights for efficiency and customer experience. The role involves managing, processing, and analyzing large amounts of raw information in scalable databases, as well as developing unique data structures and writing algorithms for an entirely new set of products. The candidate must have critical thinking and problem-solving skills, and must be experienced in software development with advanced algorithms and able to handle large volumes of data. Exposure to statistics and machine learning algorithms is a big plus. The candidate should also have some exposure to cloud environments, continuous integration, and Agile Scrum processes.
Responsibilities:
• Lead projects both as a principal investigator and project manager, responsible for meeting project requirements on schedule
• Develop software that creates data-driven intelligence in products that deal with Big Data back ends
• Perform exploratory analysis of the data to devise efficient data structures and algorithms for given requirements
• The system may or may not involve machine learning models and pipelines, but will require advanced algorithm development
• Manage data in large-scale data stores (such as NoSQL DBs, time-series DBs, geospatial DBs, etc.)
• Create metrics and evaluate algorithms for better accuracy and recall
• Ensuring efficient access and usage of data through the means of indexing, clustering etc.
• Collaborate with engineering and product development teams.
Requirements:
• Master’s or Bachelor’s degree in Engineering in one of these domains - Computer Science, Information Technology, Information Systems, or a related field - from a top-tier school
• OR a Master’s degree or higher in Statistics or Mathematics, with a hands-on background in software development
• 8 to 10 years of product development experience, including algorithmic work
• 5+ years of experience working with large data sets or doing large-scale quantitative analysis
• Understanding of SaaS based products and services.
• Strong algorithmic problem-solving skills
• Able to mentor and manage a team and take responsibility for team deadlines.
Skill set required:
• In-depth knowledge of the Python programming language
• Understanding of software architecture and software design
• Must have fully managed a project with a team
• Having worked with Agile project management practices
• Experience with data processing, analytics, and visualization tools in Python (such as pandas, matplotlib, SciPy, etc.)
• Strong understanding of SQL and of querying NoSQL databases (e.g., MongoDB, Cassandra, Redis)
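The metrics-and-evaluation responsibility mentioned earlier comes down to computing quantities such as accuracy, precision, and recall; a minimal plain-Python sketch (the labels here are hypothetical):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of actual positives, how many were found
    return accuracy, precision, recall

acc, prec, rec = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In practice scikit-learn provides these metrics directly, but the definitions above are what any custom evaluation pipeline implements.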
The platform helps companies uncover the 3% of active buyers in their target market. It evaluates
over 100 billion data points and analyzes factors such as buyer journeys, technology
adoption patterns, and other digital footprints to deliver market & sales intelligence.
Its customers have access to the buying patterns and contact information of
more than 17 million companies and 70 million decision makers across the world.
Role – Data Engineer
Responsibilities
Work in collaboration with the application team and integration team to
design, create, and maintain optimal data pipeline architecture and data
structures for Data Lake/Data Warehouse.
Work with stakeholders including the Sales, Product, and Customer Support
teams to assist with data-related technical issues and support their data
analytics needs.
Assemble large, complex data sets from third-party vendors to meet business
requirements.
Identify, design, and implement internal process improvements: automating
manual processes, optimizing data delivery, re-designing infrastructure for
greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and
loading of data from a wide variety of data sources using SQL, Elasticsearch,
MongoDB, and AWS technology.
Streamline existing and introduce enhanced reporting and analysis solutions
that leverage complex data sources derived from multiple internal systems.
Requirements
5+ years of experience in a Data Engineer role.
Proficiency in Linux.
Must have strong SQL knowledge, experience working with relational databases
and query authoring, and familiarity with databases including MySQL,
MongoDB, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow.
Experience with data pipeline and ETL tools like AWS Glue.
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
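A minimal sketch of the extract-transform-load flow described above, using Python's built-in sqlite3 as a stand-in for the warehouse; the vendor rows and table schema are hypothetical:

```python
import sqlite3

# Hypothetical raw rows from a third-party vendor feed (strings, mixed quality).
raw = [
    {"company": "Acme Corp", "revenue": "1200000"},
    {"company": "Globex", "revenue": "not reported"},
    {"company": "Initech", "revenue": "450000"},
]

def transform(row):
    """Cast revenue to int; signal rows that fail validation with None."""
    try:
        return (row["company"], int(row["revenue"]))
    except ValueError:
        return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (name TEXT, revenue INTEGER)")

# Load only the rows that survived the transform step.
clean = [r for r in (transform(row) for row in raw) if r is not None]
conn.executemany("INSERT INTO companies VALUES (?, ?)", clean)

total = conn.execute("SELECT SUM(revenue) FROM companies").fetchone()[0]
```

A production pipeline would add logging of rejected rows and run the steps under an orchestrator such as Airflow, but the extract/validate/load shape is the same.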
Data Engineer- Senior
Cubera is a data company revolutionizing big data analytics and Adtech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What are you going to do?
Design & Develop high performance and scalable solutions that meet the needs of our customers.
Work closely with Product Management, Architects, and cross-functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
Follow best practices that can be adopted in the Big Data stack.
Use your engineering experience and technical skills to drive the features and mentor the engineers.
What are we looking for (Competencies):
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.
Data handling frameworks: Should have a working knowledge of one or more data handling frameworks such as Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
Data infrastructure: Should have experience in building, deploying, and maintaining applications on popular cloud infrastructure such as AWS, GCP, etc.
Data stores: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
Benefits:
Competitive Salary Packages and benefits.
A collaborative, lively, and upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
In this role, we are looking for:
- A problem-solving mindset with the ability to understand business challenges and how to apply your analytics expertise to solve them.
- The unique person who can present complex mathematical solutions in a simple manner that most will understand, using data visualization techniques to tell a story with data.
- An individual excited by innovation and new technology and eager to find ways to employ these innovations in practice.
- A team mentality, empowered by the ability to work with a diverse set of individuals.
- A passion for data, with a particular emphasis on data visualization.
Basic Qualifications
- A Bachelor’s degree in Data Science, Math, Statistics, Computer Science or related field with an emphasis on data analytics.
- 5+ years of professional experience, preferably in a data analyst / data scientist role or similar, with proven results.
- 3+ years of professional experience in a leadership role guiding high-performing, data-focused teams, with a track record of building and developing talent.
- Proficiency in your statistics / analytics / visualization tool of choice, but preferably in the Microsoft Azure Suite, including PowerBI and/or AzureML.
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications including
practical experience with and theoretical understanding of algorithms for classification,
regression and clustering.
- Hands-on experience in computer vision and deep learning projects solving real-world
problems involving vision tasks such as object detection, object tracking, instance
segmentation, activity detection, depth estimation, optical flow, multi-view geometry,
domain adaptation, etc.
- Strong understanding of modern and traditional Computer Vision Algorithms.
- Experience in one of the Deep Learning Frameworks / Networks: PyTorch, TensorFlow,
Darknet (YOLO v4 v5), U-Net, Mask R-CNN, EfficientDet, BERT etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix,
and Cycle GAN.
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiarity with software development practices/pipelines (DevOps: Kubernetes, Docker
containers, CI/CD tools).
- Strong communication skills.
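The CNN architectures listed above are all built from one core operation, 2D convolution; here is a toy plain-Python version to illustrate it (a real project would use PyTorch or TensorFlow, and this sketch ignores padding, stride, and channels):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation in CNN layers."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Slide the kernel over the image and sum the elementwise products.
            out[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw)
            )
    return out

# A tiny 4x4 image with a sharp vertical edge, filtered by a vertical-edge kernel.
img = [[0, 0, 1, 1]] * 4
edge = conv2d(img, [[-1, 1], [-1, 1]])
```

The filter responds only at the column where the intensity jumps, which is exactly how learned CNN kernels detect edges and textures in early layers.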
- Expert knowledge of IBM DB2 V11.5 installations, configurations & administration in Linux systems.
- Expert level knowledge in Database restores including redirected restore & backup concepts.
- Excellent understanding of database performance monitoring techniques and fine-tuning, and able to perform performance checks & query optimization.
- Good knowledge of utilities like import, load & export under high-volume conditions.
- Ability to tune SQL statements using db2advis & db2expln.
- Ability to troubleshoot database issues using db2diag, db2pd, db2dart, db2top, etc.
- Administration of database objects.
- Capability to review & assess features or upgrades to existing components.
- Experience in validating security aspects on a confidential database.
- Hands-on experience in SSL communication setup, strong access control, and database hardening.
- Experience in performing productive DB recovery and validating crash recovery.
- Experience in handling incidents & opening DB2 support tickets.
- Experience in deploying a special build DB2 version from DB2 support.
- Worked in environments with 12x5 support of production database services.
- Excellent problem-solving skills, analytical skills.
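The query-optimization skills above (reading access plans, recommending indexes) can be illustrated with SQLite standing in for DB2, since db2expln and db2advis are only available inside a DB2 installation; the table and data here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [(f"cust{i % 100}", float(i)) for i in range(1000)])

def plan(sql):
    """Return the access plan as one string (detail column of EXPLAIN QUERY PLAN)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer = 'cust7'"
before = plan(query)   # without an index: a full table scan
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
after = plan(query)    # with the index: a targeted index search
```

The same before/after discipline applies in DB2: capture the plan, add or adjust an index, and confirm the optimizer actually picks it up.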
Good to have:
- Experience in handling application servers (WebSphere, WebLogic, Jboss, etc.) in highly available Production environments.
- Experience in maintenance, patching, and installing updates on WebSphere Servers in the Production environment.
- Able to handle installation/deployment of the product (JAR/EAR/WAR) independently.
- Knowledge of ITIL concepts (Service operation & transition)
Soft skills:
- Ability to work with the global team (co-located staffing).
- Carries a learning attitude, works as an individual contributor, and has excellent communication skills.
- Support: 12/5 support is required (on a rotational basis).
- We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
- The applicant must be able to perform Data Mapping (data type conversion, schema harmonization) using Python, SQL, and Java.
- The applicant must be familiar with and have programmed ETL interfaces (OAUTH, REST API, ODBC) using the same languages.
- The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
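The data-mapping requirement above (type conversion, schema harmonization) can be sketched as a declarative field mapping; the source record and target field names here are hypothetical:

```python
import json
from datetime import datetime

# Hypothetical source record from a vendor REST API (all values arrive as strings).
source = json.loads('{"CustomerId": "00042", "signup": "2023-07-01", "ltv": "199.90"}')

# Target schema: target field name -> (source field name, type converter).
MAPPING = {
    "customer_id": ("CustomerId", int),
    "signup_date": ("signup", lambda s: datetime.strptime(s, "%Y-%m-%d").date()),
    "lifetime_value": ("ltv", float),
}

def harmonize(record, mapping):
    """Rename fields and convert types so a record matches the target schema."""
    return {target: conv(record[src]) for target, (src, conv) in mapping.items()}

row = harmonize(source, MAPPING)
```

Keeping the mapping as data rather than code makes it easy to extend when a new vendor feed with different field names is onboarded.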
Object-oriented languages (e.g., Python, PySpark, Java, C#, C++) and frameworks (e.g., J2EE or .NET)
Only a solid grounding in computer engineering, Unix, data structures, and algorithms will enable you to meet this challenge.
- 7+ years of experience architecting, developing, releasing, and maintaining large-scale big data platforms on AWS or GCP
- Understanding of how Big Data tech and NoSQL stores like MongoDB, HBase/HDFS, and ElasticSearch synergize to power applications in analytics, AI, and knowledge graphs
- Understanding of how data processing models, data locality patterns, disk IO, network IO, and shuffling affect large-scale text processing (feature extraction, searching, etc.)
- Expertise with a variety of data processing systems, including streaming, event, and batch (Spark, Hadoop/MapReduce)
- 5+ years of proficiency in configuring and deploying applications on Linux-based systems
- 5+ years of experience with Spark, especially PySpark, for transforming large unstructured text data and creating highly optimized pipelines
- Experience with RDBMS, ETL techniques and frameworks (Sqoop, Flume), and big data querying tools (Pig, Hive)
- A stickler for world-class best practices: uncompromising on engineering quality, conversant with standards and reference architectures, and deeply versed in the Unix philosophy, with an appreciation of big data design patterns, orthogonal code design, and functional computation models
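The data processing model mentioned above (map, shuffle, reduce) can be illustrated in plain Python; Spark and Hadoop generalize exactly this pattern across a cluster, with the shuffle step being the one that moves data over the network:

```python
from collections import defaultdict

def map_phase(docs):
    """Map: emit (word, 1) pairs from each document, like a Spark flatMap."""
    for doc in docs:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key; on a cluster this is the network-heavy step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values into a final count."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

Understanding which phase dominates (CPU in map/reduce vs. network and disk in shuffle) is what lets an engineer reason about the IO and shuffling costs the posting refers to.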