About UPS:
Moving our world forward by delivering what matters! UPS is a company with a proud past and an even brighter future. Our values define us. Our culture differentiates us. Our strategy drives us. At UPS we are customer first, people led and innovation driven. UPS’s India based Technology Development Centers will bring UPS one step closer to creating a global technology workforce that will help accelerate our digital journey and help us engineer technology solutions that drastically improve our competitive advantage in the field of Logistics.
‘Future You’ grows as a visible and valued Technology professional with UPS, driving us towards an exciting tomorrow. As a global Technology organization we can put serious resources behind your development. If you are solutions-oriented, UPS Technology is the place for you. ‘Future You’ delivers ground-breaking solutions to some of the biggest logistics challenges around the globe. You’ll take technology to unimaginable places and really make a difference for UPS and our customers.
Job Summary:
The Senior Data Engineer - SQL BI supervises and participates in the development of batch and real-time data pipelines using various data analytics processing frameworks in support of Business Intelligence (BI), Data Science, and web application products. This position participates in and supports the integration of data from internal and external data sources, performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. It also performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, and implementation of systems and application software; contributes to synthesizing disparate data sources to create reusable and reproducible data assets; and contributes to application development through semantic and analytical modeling.
REQUIREMENTS
- 4+ years of relevant professional experience
- In-depth experience with on-premises SQL Server (SQL, SSIS, SSAS)
- Some experience in Azure (Databricks, Data Factory, Apache Spark, Python)
- Familiarity with Delta Lake and Unity Catalog concepts in Databricks (see the sketch after this list)
- Demonstrated awareness of Data Warehouse concepts (Star Schema) and methodologies
- Experience with different types of data feeds (XML, JSON, etc.)
- Familiarity with Data Visualization tools (Power BI is preferred)
- Experience working within Agile Frameworks
- .NET experience is preferred
- Proficiency in Python, Java, or C# is preferred.
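For illustration only, here is a minimal PySpark sketch of the kind of Databricks/Delta Lake work this role touches: reading a table through a Unity Catalog three-level name (catalog.schema.table), aggregating it, and writing the result back as a Delta table. The catalog, schema, table, and column names are hypothetical placeholders, not anything specified in this posting.

```python
# Illustrative sketch only: read a Delta table via a Unity Catalog name and
# write an aggregated Delta table back. All names below are made-up placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

shipments = spark.read.table("main.logistics.shipments")  # catalog.schema.table

daily_volume = (
    shipments
    .groupBy(F.to_date("event_ts").alias("event_date"), "service_level")
    .agg(F.count("*").alias("package_count"))
)

daily_volume.write.format("delta").mode("overwrite").saveAsTable("main.logistics.daily_volume")
```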
Additional Information: This role will be in-office 3 days a week in Chennai, India.
Designation: Perception Engineer (3D)
Experience: 0 years to 8 years
Position Type: Full Time
Position Location: Hyderabad
Compensation: As per industry standards
About Monarch:
At Monarch, we’re leading the digital transformation of farming. Monarch Tractor augments both muscle and mind with fully loaded hardware, software, and service machinery that will spur future generations of farming technologies.
With our farmer-first mentality, we are building a smart tractor that will enhance (not replace) the existing farm ecosystem, alleviate labor availability and cost issues, and provide an avenue for competitive organic farming and beyond, replacing harmful chemical solutions with mechanical ones. Despite all the cutting-edge technology we will incorporate, our tractor will still plow, till, and haul better than any other tractor in its class. We have all the necessary ingredients to develop, build, and scale the Monarch Tractor and digitally transform farming around the world.
Description:
We are looking for engineers to work on applied research problems related to perception in autonomous driving of electric tractors. The team works on classical and deep learning-based techniques for computer vision. Several problems, such as SfM, SLAM, 3D image processing, and multiple-view geometry, are being solved for deployment on resource-constrained hardware.
Technical Skills:
- Background in Linear Algebra, Probability and Statistics, graphical algorithms and optimization problems is necessary.
- Solid theoretical background in 3D computer vision, computational geometry, SLAM and robot perception is desired. Deep learning background is optional.
- Knowledge of some of the following numerical algorithms or libraries: Bayesian filters, SLAM, Eigen, Boost, g2o, PCL, Open3D, ICP (see the sketch after this list).
- Experience in two-view and multi-view geometry.
- Necessary Skills: Python, C++, Boost, Computer Vision, Robotics, OpenCV.
- For freshers, academic experience in vision for robotics is preferred.
- Experienced candidates in Robotics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply.
- Software development experience on low-power embedded platforms is a plus.
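As a rough illustration of the perception stack named in the list above, here is a minimal sketch of point-to-point ICP registration with Open3D. The point cloud file names and the 5 cm correspondence threshold are assumptions made up for the example.

```python
# Minimal point-to-point ICP alignment sketch using Open3D.
# File names and the distance threshold are hypothetical placeholders.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_t0.ply")
target = o3d.io.read_point_cloud("scan_t1.ply")

threshold = 0.05        # max correspondence distance in metres (assumed)
init = np.eye(4)        # initial guess: identity transform

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print("fitness:", result.fitness)
print("estimated transform:\n", result.transformation)
```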
Responsibilities:
- Understand engineering principles and have a clear grasp of data structures and algorithms.
- Understand, optimize, and debug imaging algorithms.
- Drive a project from conception to completion, from research papers to code, with a disciplined approach to software development on Linux platforms.
- Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents.
- Optimize runtime performance of designed models.
- Deploy models to production and monitor performance and debug inaccuracies and exceptions.
- Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives.
- Thrive in a fast-paced environment and own projects end to end with minimal hand-holding.
- Learn & adapt to new technologies & skillsets.
- Work on projects independently with timely delivery and a defect-free approach.
- Candidates whose thesis focuses on the above skill set may be given preference.
What you will get:
At Monarch Tractor, you’ll play a key role on a capable, dedicated, high-performing team of rock stars. Our compensation package includes a competitive salary and excellent health benefits commensurate with the role you’ll play in our success.
Responsibilities:
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for the Data Lake/Data Warehouse.
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.
Requirements
- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Must have experience with Python/Scala.
- Must have experience with Big Data technologies like Apache Spark.
- Must have experience with Apache Airflow (see the sketch after this list).
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
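For context on the Airflow and AWS items above, here is a minimal sketch of an Airflow DAG that schedules a single Python extract-and-load task. The DAG id, schedule, bucket path, and extract_and_load function are hypothetical placeholders, not anything from this posting.

```python
# Minimal Airflow DAG sketch: one daily task that would extract source data
# and load it to S3. All names and paths are made-up placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for real extraction logic (e.g. MySQL -> S3 via boto3 or AWS Glue).
    print("extracting source tables and loading to s3://example-bucket/raw/")


with DAG(
    dag_id="daily_vendor_feed",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```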
Key Responsibilities (Data Developer: Python, Spark)
Experience: 2 to 9 years
- Development of data platforms, integration frameworks, processes, and code.
- Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages.
- Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests, and unit tests.
- Elaborate stories in a collaborative agile environment (Scrum or Kanban).
- Familiarity with cloud platforms like GCP, AWS, or Azure.
- Experience with large data volumes.
- Familiarity with writing REST-based services.
- Experience with distributed processing and systems.
- Experience with Hadoop/Spark toolsets.
- Experience with relational database management systems (RDBMS).
- Experience with Data Flow development.
- Knowledge of Agile and associated development techniques.
- Expertise in handling large amounts of data through Python or PySpark.
- Conduct data assessment, perform data quality checks, and transform data using SQL and ETL tools (see the sketch after this list).
- Experience deploying ETL/data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued.
- Comfort with data modelling principles (e.g., database structure, entity relationships, UIDs, etc.) and software development principles (e.g., modularization, testing, refactoring, etc.).
- A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training.
- Track record of strong problem-solving, requirement gathering, and leading by example.
- Ability to thrive in a flexible and collaborative environment.
- Track record of completing projects successfully on time, within budget, and as per scope.
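As an illustrative sketch of the data-quality-check-and-transform work mentioned above (not part of the posting itself), here is a small PySpark example that enforces a null-ratio threshold and then runs a SQL aggregation. The storage paths, column names, and 1% threshold are assumptions invented for the example.

```python
# Illustrative sketch only: a simple data-quality check plus a SQL transform in PySpark.
# Paths, columns, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_check").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Data quality check: reject the batch if too many rows are missing a customer_id.
null_ratio = orders.filter(F.col("customer_id").isNull()).count() / max(orders.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"customer_id null ratio {null_ratio:.2%} exceeds the 1% threshold")

# Transform with SQL and write the cleansed result.
orders.createOrReplaceTempView("orders")
daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```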
This person MUST have:
- B.E Computer Science or equivalent
- 5 years of experience with the Django framework
- Experience with building APIs (REST or GraphQL)
- Strong troubleshooting and debugging skills
- React.js knowledge would be an added bonus
- Understanding of how to use a database such as Postgres (preferred choice), SQLite, MongoDB, or MySQL.
- Sound knowledge of object-oriented design and analysis.
- A strong passion for writing simple, clean and efficient code.
- Proficient understanding of code versioning tools such as Git.
- Strong communication skills.
Experience:
- Minimum 5 years of experience
- Startup experience is a must.
Location:
- Remote developer
Timings:
- 40 hours a week, with 4 hours a day overlapping with the client's timezone. Typically, clients are in the California (PST) timezone.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Must have experience with e-commerce projects.
The candidate must have expertise in ADF (Azure Data Factory) and be well versed in Python.
Performance optimization of scripts (code) and productionization of code (SQL, Pandas, Python or PySpark, etc.)
Required skills:
- Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
- Fluency in Python (Pandas), PySpark, SQL, or similar
- Azure Data Factory experience (minimum 12 months)
- Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process
- Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
- Ability to work independently, with demonstrated experience in project or program management
- Azure experience; ability to translate data scientists' Python code and make it efficient for production cloud deployment (see the sketch after this list)
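As a hedged illustration of what making data scientist code efficient for cloud deployment can look like, the sketch below expresses the same aggregation first as a Pandas prototype and then in PySpark so it can scale in an Azure Databricks job triggered, for example, from ADF. The storage paths, column names, and prototype function are invented for the example.

```python
# Hypothetical example: a Pandas prototype rewritten as PySpark for production scale.
# All paths and column names are made-up placeholders.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Prototype (Pandas): average basket value per customer.
def prototype(df: pd.DataFrame) -> pd.DataFrame:
    return df.groupby("customer_id", as_index=False)["basket_value"].mean()

# Production version (PySpark): the same logic, distributed across a cluster.
spark = SparkSession.builder.getOrCreate()
baskets = spark.read.parquet("abfss://raw@exampleaccount.dfs.core.windows.net/baskets/")
avg_basket = baskets.groupBy("customer_id").agg(F.avg("basket_value").alias("basket_value"))
avg_basket.write.mode("overwrite").parquet(
    "abfss://curated@exampleaccount.dfs.core.windows.net/avg_basket/"
)
```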
Data Platform engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:
- Demonstrate tech expertise: Demonstrate technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: Define, plan, and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
- Collaborate across teams: Act as a domain resource to engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive to a consensus.
- Coach engineers: Coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
- Tech leadership: Lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.
What You’ll Do
- Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
- Work alongside the pioneers of big data systems such as Hive, YARN, Spark, Presto, Kafka, and Flink to build out a highly reliable, performant, easy-to-use software system for Uber's planet-scale data.
- Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a high-performance, large-scale service while building these capabilities for Uber's engineers and operations teams.
What You’ll Need
- 7+ years of experience building large-scale products, data platforms, and distributed systems in a high-caliber environment.
- Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
- Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
- Data Engineering: Expertise in one of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, or Arrow. Under-the-hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon Redshift, Docker, Kubernetes, or Mesos.
- Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
- A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Role and Responsibilities
- Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
- Build robust RESTful APIs that serve data and insights to DataWeave and other products (see the sketch after this list)
- Design user interaction workflows on our products and integrate them with data APIs
- Help stabilize and scale our existing systems. Help design the next-generation systems.
- Scale our back-end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
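Purely as an illustration of the data-serving REST APIs described above (the posting does not specify a framework), here is a minimal Flask sketch. The route, report names, and fields are hypothetical placeholders, and the in-memory dictionary stands in for a real store such as MySQL, Redis, or Elasticsearch.

```python
# Minimal sketch of a REST endpoint serving pre-aggregated analytics data.
# Framework choice (Flask) and the report names/fields are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real data store (MySQL, Redis, Elasticsearch, etc.).
REPORTS = {
    "price-trends": {"product_count": 1240, "avg_discount": 0.12},
}

@app.route("/api/v1/reports/<report_id>", methods=["GET"])
def get_report(report_id):
    report = REPORTS.get(report_id)
    if report is None:
        return jsonify({"error": "report not found"}), 404
    return jsonify({"id": report_id, "data": report})

if __name__ == "__main__":
    app.run(debug=True)
```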
Skills and Requirements
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.